Hybrid and multi-cloud architectures have become the de facto standard, with more than half of organisations (53 percent) adopting them, making them the most popular form of deployment.
Surveying more than 250 business executives and IT professionals worldwide from a diverse range of technical backgrounds, data virtualisation leader Denodo’s third annual cloud usage survey revealed that hybrid cloud configurations sit at the centre of cloud deployments at 42 percent, followed by public (18 percent) and private clouds (17 percent). According to respondents, the advantages of hybrid and multi-cloud configurations include the ability to diversify spend and skills, build resiliency, and cherry-pick features and capabilities based on each cloud service provider’s particular strengths, all while avoiding the dreaded vendor lock-in.
The use of container technologies increased by 50 percent year-over-year, indicating a growing reliance on containers for scalability and portability in the cloud. DevOps professionals continue to look to containerisation for production because it enables reproducibility and automated deployments. About 80 percent of respondents are leveraging some type of container deployment, with Docker the most popular (46 percent), followed by Kubernetes (40 percent), which is gaining steam, as evidenced by consistent support from all the key cloud providers.
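To make the automation point concrete, here is a minimal sketch (ours, not from the survey) of scripting a reproducible container deployment with the Docker SDK for Python; it assumes a local Docker daemon and the `docker` package, and the image and command are purely illustrative.

```python
# Minimal sketch: a reproducible, scriptable container deployment using the
# Docker SDK for Python (pip install docker). Assumes a local Docker daemon;
# the image and command are illustrative.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Pin an exact image tag so every deployment runs the same artifact
client.images.pull("python", tag="3.8-slim")

# Launch the workload detached; the same call can be repeated across hosts
container = client.containers.run(
    "python:3.8-slim",
    command=["python", "-c", "print('service up')"],
    detach=True,
)
container.wait()                  # block until the command finishes
print(container.logs().decode())  # -> service up
container.remove()
```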
A foundational metric for demonstrating cloud adoption maturity: more than three quarters (78 percent) of all respondents are running some kind of workload in the cloud. Over the past year, cloud adoption has been positively reinforced, with at least a 10 percent increase across beginner, intermediate, and advanced adopters. About 90 percent of those embracing cloud are selecting Amazon Web Services (AWS) and Microsoft Azure as their service providers, demonstrating the continued dominance of these front-runners. But users are not just lifting their on-premises applications and shifting them to either or both of these clouds; more than a third (35 percent) said they would re-architect their applications for the best-fit cloud architecture.
Analytics and BI emerged as the most popular cloud initiative, with two out of three (66 percent) participants using the cloud for big data analytics projects. AWS, Azure, and Google Cloud each have their own specific strengths, but analytics surfaced as the top use case across all three. It was followed closely by logical data warehouse (43 percent) and data science (41 percent) in the cloud.
When it comes to data formats, just over two-thirds (68 percent) of the data in use is still in structured format, while a vast pool of unstructured data is growing in importance. Cloud object storage (47 percent) and SaaS data (44 percent) are frequently used to maximise ease of computation and performance optimisation.
Further, cloud marketplaces are growing at phenomenal speed. Half (50 percent) of those surveyed are leveraging cloud marketplaces, with utility/pay-as-you-go pricing the most popular incentive (19 percent), followed by self-service capability and the ability to minimise IT dependency (13 percent). Avoiding a long-term commitment also played a role (6 percent).
“As data’s center of gravity shifts to the cloud, hybrid cloud and multi-cloud architectures are becoming the basis of data management, but the challenge of integrating data in the cloud has almost doubled (43 percent),” said Ravi Shankar, SVP and CMO of Denodo. “Today, users are looking to simplify cloud data integration in a hybrid/multi-cloud environment without having to depend on heavy duty data migration or replication which may be why almost 50 percent of respondents said they are considering data virtualisation as a key part of their cloud integration and migration strategy.”
Kofax has published the 2020 Intelligent Automation Benchmark Report, a study conducted by Forrester Consulting and commissioned by Kofax. The report finds that while many enterprises have prioritised automation, they are struggling to scale and achieve hyperautomation. It also finds that taking an integrated approach to intelligent automation can result in accelerated ROI, enhanced customer success and employee satisfaction, and reduced technical debt.
“The 2020 Benchmark data clearly tells a story of enterprises moving beyond siloed, ad hoc automation and toward integrated, single-vendor Intelligent Automation platforms. Boards and executives understand the value of a single platform that can digitally transform a multitude of processes while providing an open architecture capable of easily connecting to third-party applications,” says Chris Huff, Kofax’s Chief Strategy Officer. “Our aggressive R&D has been aimed at cloud-enablement, embedding AI to handle unstructured data, and orchestrating downstream workflows – allowing customers to rapidly drive increased capacity, productivity, employee satisfaction and customer success.”
For the second consecutive year, the Kofax 2020 Benchmark Report reveals that organisations are making considerable headway in automating key front- and back-office operations:
Hyperautomation accelerates business transformation and success by enabling IT and citizen developers to harness complementary integrated automation technologies – including process discovery, robotic process automation, business process management, advanced analytics, business rules, embedded artificial intelligence and machine learning. Despite automation gains, the report points to several factors preventing organisations from achieving hyperautomation.
Nearly all decision makers surveyed (98%) report that adopting an unintegrated approach to automation resulted in unanticipated challenges.
Two of the most significant challenges reported were high technical debt (46%) and delayed success (35%).
Nearly half (45%) of enterprises report they’ve taken ad hoc approaches, automating their many use cases via siloed solutions from a multitude of vendors.
99% of decision makers believe there would be considerable value in working with a single automation vendor and automation platform.
52% of decision makers cite improved customer experience as the top benefit of leveraging a single-vendor platform.
78% of employees say a single-vendor automation platform provides greater efficiency in their daily tasks, and 65% say it allows them to be more productive.
SolarWinds IT Trends Report 2020: The Universal Language of IT examines the evolving role of technology in business and the breakdown of traditional IT siloes.
SolarWinds has released the findings of SolarWinds IT Trends Report 2020: The Universal Language of IT. This year’s annual report studies how the breakdown of traditional IT siloes has affected technology professionals across on-premises, cloud, and hybrid environments. While the survey data was gathered before the COVID-19 (or Coronavirus) pandemic elevated technology professionals to the status of essential workers, the findings are underscored by this challenging period of remote work and increased burdens on the IT environments keeping global organisations operating at full capacity. The study reveals a new reality for tech pros in which roles have converged, yet budgets remain focused less on emerging technologies and more on infrastructure and hybrid IT, expanding their charter from operations to optimisation.
The “universal language of IT” encapsulates the evolving role of technology in business, and tech pros’ responsibility for ensuring overall uptime, availability, and performance, as well as a greater partnership with leadership to drive business success. As cloud computing continues to grow, tech pros say they are increasingly prioritising areas like hybrid infrastructure management, application performance management (APM), and security management to optimise delivery for the organisations they serve. With the convergence of IT roles in response to the interconnected nature of modern IT environments—and now the need to support a new or larger remote workforce—tech pros are also setting their sights on non-technical and interpersonal skills, so that teamwork and communication with business leaders increase their fluency in the universal language of IT. Skills development is needed across both technical and non-technical areas to remain successful in today’s environments.
“For years we’ve been talking about hybrid IT and what it means for tech pros; in our seventh year of the IT Trends Report, we see the effects of hybrid IT in breaking down traditional siloes and bringing core competencies across on-premises and cloud environments together,” said Joe Kim, executive vice president and global chief technology officer, SolarWinds. “Especially now, when organisations worldwide are facing new challenges and uncertainty, we must take this reality seriously, focusing on skills development and readiness in key areas like security, cloud infrastructure, and application monitoring. While IT continues to be a main driver of business importance, tech pros have an opportunity to help reassure the business and focus on effectively communicating performance now and into the future.”
“More than ever before, technology professionals must work alongside business leaders to meet organisational goals while also investing time and energy into cultivating the necessary skills to drive business success,” added Kim. “At SolarWinds, we focus on enabling the tech pro with easy to use, affordable products, but we also understand our customers often need more from our partnership. That’s why we also make meaningful investments in providing a wide range of training resources—many of which have been virtual since their inception—and an online user community where they can connect with their peers. We have many ways we do this: our Customer Success Center, MSP Institute, SolarWinds Academy, our THWACK® community of over 150,000 registered members and yearly virtual learning event, THWACKcamp™, our bi-annual customer event SolarWinds Empower MSP, and educational digital programming like SolarWinds Lab™ and TechPod™. Each of these avenues serves to help make life easier for tech pros so they can drive even more success for the businesses they support.”
2020 Key Findings
SolarWinds IT Trends Report 2020: The Universal Language of IT explores the priority areas tech pros manage in a world where roles have converged, and how this reality is affecting skillsets across IT departments and in non-technical areas. Key findings show:
Tech pros are focusing less on emerging technology like artificial intelligence (AI) and edge, and more on hybrid IT and security.
Today’s hybrid IT reality has created a universal language of IT where tech pro roles and siloes converge, and complexities are exacerbated by flat to shrinking budgets and a lack of qualified personnel.
Many personnel and skills issues relate to growing areas like APM and security and compliance.
Tech pros need to develop non-technical skills to operate within the universal language of IT reality, where cross-functional and business-level communication is necessary.
Facing market uncertainty brought on by the global pandemic, B2B organisations must adapt to changes in how they connect with customers: 54% of leaders in IT, marketing and e-commerce roles define their company’s customer relationships as strained, developing or non-existent, according to a new survey from Episerver, the customer-centric digital experience company.
In turn, delivering relevant, personalised digital experiences has emerged as a top priority, and direct selling as the most significant opportunity, for B2B leaders navigating a new reality, according to Episerver’s B2B Digital Experience Report.
The March 2020 survey of 600 global decision-makers in IT, e-commerce and marketing roles at B2B organisations indicates that 41% of respondents believe selling directly to customers online is the most significant opportunity for their business in the next year, followed by expanding into new geographies (37%) and providing their salesforce with digital selling tools (36%).
Despite the economic downturn, 85% of B2B organisations still expect their digital experience budget to increase next year, which should help the 71% of B2B leaders who agree that the digital experience their company offers does not meet the needs and expectations of its customers.
“It is clear from our data and conversations with customers that digital transformation is being accelerated to address immediate needs due to COVID-19,” said Alex Atzberger, CEO of Episerver. “Direct-to-consumer sales, for example, have been discussed for years, but now the time is there to rethink your go-to-market channel. Getting in touch with customers directly and in a hyper-relevant way is business critical when in-person tactics are impossible to execute. You can’t not be digital anymore; you can’t not create content to create engaging experiences; you can’t not sell directly.”
The survey also discovered that many B2B organisations are struggling to meet customer expectations and are faced with a formidable competitor. Fifty-two percent of B2B leaders believe their company is losing revenue to Amazon, yet despite those perceived losses, 52% also say Amazon is seen more as an opportunity than a threat.
While technologies offer a potential solution to today’s challenges, anxieties remain around the impact of AI and automation on future job security according to Episerver’s second-annual B2B Digital Experience Report.
In today’s world, businesses face the challenge of deciding which products, services and technologies to invest in, and how to adjust their budgets to be better positioned for the current recession.
Research into corporate sustainability practices reveals the role global organizations play in damaging natural environments.
Research published by Blancco Technology Group (LON: BLTG), the industry standard in data erasure and mobile device diagnostics, explores the issues associated with the corporate sustainability practices that some of the world’s largest enterprises are following today. Blancco’s study, Poor sustainability practices – enterprises are overlooking the e-waste problem, produced in partnership with Coleman Parkes, reveals that only a quarter (24 percent) of end-of-life equipment is being sanitized and reused, despite 83 percent of organizations having a Corporate Social Responsibility (CSR) policy in place.
Despite the media conversation around climate change ramping up following global fires and record-high temperatures in Antarctica – and the topic taking centre stage at events like Davos – enterprises are not paying due attention to their contribution to this urgent, global issue. What’s more, Blancco’s study shows that the sustainability practices that form part of enterprises’ CSR policies are not being followed in practice. This is driving two main issues:
• A surge in e-waste – Over a third (39 percent) of organizations physically destroy end-of-life IT equipment because they believe it is “better for the environment.” Physically destroying IT assets, when accompanied by a certificate of destruction and a full audit trail, is a valid data disposal option when hardware has reached end-of-life. However, if electronics are improperly disposed of and end up in landfill, the toxic or hazardous materials they contain, such as mercury and lead, can be harmful to the environment, and anyone who is exposed to them.
• Cyber landfill – There are more than 34 billion IT devices in the world today, generating 2.5 quintillion bytes of data daily. According to research from Hewlett Packard Enterprise, only about 6 percent of all data ever created is currently in use, which means 94 percent is sitting in a vast “cyber landfill.” Organizations around the world are therefore sitting on vast amounts of redundant, obsolete or trivial (ROT) data they don’t need and that is consuming valuable energy resources.
So why are so many organizations choosing to physically destroy equipment or to keep unnecessary data in active corporate environments? The answer lies in three major areas.
In addition to a lack of education, with organizations believing that physically destroying non-functional or end-of-life equipment is “better for the environment”, the study highlights a clear lack of ownership and communication. Dealing with end-of-life equipment is part of the majority of organizations’ CSR policies (91 percent), but this isn’t being communicated or properly enacted across the business.
The lack of robust regulations globally also plays a critical role. In the U.S. alone, 22 states don’t have statewide e-waste laws. And despite the existence of the EU’s WEEE Directive and the WEEE Regulations (2013), the U.K. missed its targets in 2018 and is one of the worst offenders for exporting waste to developing countries. Radical action and more robust regulations are needed.
“In today’s global climate, sustainability should be at the heart of every business’ strategy,” said Fredrik Forslund, Vice President, Enterprise and Cloud Erasure Solutions at Blancco. “Yet, it’s clear from our research that organizations globally are not doing enough. By managing retired IT assets in a more environmentally friendly way, putting them back into the circular economy and removing unnecessary data in active environments – should be best practice for all organizations. Furthermore, by actively looking at the data they hold as part of their data lifecycle management initiatives and regularly and securely removing the data they no longer need, organizations will not only reduce their energy consumption – but also remain compliant.”
Advanced technologies such as AI, blockchain and continuous authentication to transform the connected era in 2030.
Frost & Sullivan’s recent analysis, The Future of Privacy and Cybersecurity, Forecast to 2030, finds that by 2030, there will be a complex global network of 200 billion devices, with over 20 connected devices per human. As the Internet of Things landscape is expected to progressively expand beyond the traditional network in use today, there will be an increase in the complexity of privacy and cybersecurity challenges. Consequently, the market will experience deeper synergies among data protection, security, privacy, and public good as more international frameworks are developed to govern the internet.
"Artificial Intelligence (AI) will emerge as the new frontier of privacy and cybersecurity as enterprises explore new opportunities and train a capable workforce to identify critical threats, respond faster to breaches, and learn from them,” said Vinay Venkatesan, Visionary Innovation Research Consultant. "In addition to AI, data de-identification, advanced authentication and encryption, biometrics, Blockchain, automation, and quantum computing also will have the potential to transform privacy and cybersecurity."
Venkatesan added: "There will be more than 26 smart cities by 2025, most of them in North America and Europe. Additionally, boundaries between work and home continue to blur, as we’re experiencing right now. This means every connected device in a smart home, enterprise or city will be a potential access point to our most sensitive and personal data, making mass non-consensual data collection feasible and cybersecurity all the more vital."
F5’s State of Application Services Report showed that 88% of surveyed EMEA organisations are benefiting from multi-cloud environments.
According to the sixth annual State of Application Services (SOAS) report, 88% of surveyed EMEA organisations were leveraging multi-cloud environments, compared to 87% in the Americas and 86% in the APCJ region.
27% of EMEA respondents also claimed they would have more than half of their applications in the cloud by the end of 2020. Meanwhile, 54% agreed that cloud in all its forms is the top strategic trend for the next two to five years.
The SOAS report goes on to note that EMEA organisations were more likely than any other region to choose cloud platforms that support applications on a case-by-case basis, with 43% opting for the increasingly popular approach (compared to 42% worldwide). This chimes with the fact that 70% stated that it is “very important” to be able to deploy and enforce the same security policies on-premises and in the cloud. In the Americas 69% of respondents concurred, with APCJ slightly behind on 65%.
“Inflexible, one-size-fits-all solutions won’t work anymore in the cloud, so it is encouraging to see that per-application strategies are becoming more widespread in EMEA,” said Brett Ley, Senior EMEA Cloud Director, F5 Networks.
“Every application is unique and serves a specific function, such as finance, sales, or production. Each will have end users that scale from less than a hundred into the millions. And each has a different risk exposure that can span from a breach being simply embarrassing to costing the business billions of dollars’ worth of damage.”
Enduring challenges and security concerns
33% of EMEA organisations cited regulatory compliance as the biggest challenge when managing applications in multi-cloud environments – once again higher than in any other region, and partly due to complexities stemming from the EU General Data Protection Regulation (GDPR).
Other pressing concerns included applying consistent security policies across all applications (30%), safeguarding against emerging threats (28%), and migrating applications between clouds and data centres (28%).
When it comes to security postures, respondents reported lower confidence levels in their ability to withstand an application-layer attack in the public cloud (only 15% were “very confident” they could do so), versus in an on-premises data centre (30%) or via colocation deployments (20%).
The cloud security challenge is further exacerbated by a growing industry skills gap: as many as 66% of EMEA organisations believe they lack the necessary security talent going forward. The Americas region is close behind, with 65% claiming the same. The problem was most pronounced in the APCJ region, where it was an issue for 76%.
Despite EMEA’s proactive embrace of per-app cloud strategies, the SOAS report found that many still struggle to provide security parity across all application environments.
It is an ongoing issue across the world and is complicated by the sheer diversity of the average application portfolio. No generation of technology is currently leading the way. According to SOAS respondents from all surveyed regions, client-server architecture remains the most prominent and accommodates around a third (34%) of applications, with three-tier web applications coming in second at 26%. Newer mobile (14%) and microservices/cloud-native (15%) architectures are on the rise, but old school mainframe/monoliths still account for 11%.
“A heterogeneous mix of application architectures is currently the norm and highlights the fact that multi-cloud deployments are very much here to stay,” added Ley.
“It is important to realise that the notion of achieving a single application architecture or uniform infrastructure environment is a pipedream for most organisations of scale. As such, it is imperative to have application services that span multiple architectures and multiple infrastructures. This will ensure consistent - and cost-effective - performance, security, and operability across the entire application portfolio.”
Although nearly half (45%) of IT decision-makers around the world are planning to move to a multicloud architecture, fewer than 1 in 5 businesses are currently deploying across multiple clouds.
Equinix has published the findings of a global survey exploring IT decision-makers’ insights into the biggest technology trends shaping the worldwide economy. The results of the study—which gathered responses from nearly 2,500 participants from 23 countries in the Americas, EMEA and Asia-Pacific—show companies were already preparing for a more connected world, ahead of the dynamically changing environment triggered by COVID-19.
Findings revealed there are significant ambitions by businesses to embrace a multicloud approach, but that adoption is still at less than 20% worldwide, and at 14% in the UK. Globally, one in two (50%) IT leaders, compared to just below half (45%) of respondents in the UK, state they are prioritising moving their infrastructure to the digital edge—where population centres, commerce, and digital and business ecosystems meet and interact in real time—as part of their organisation’s overarching technology strategy.
Nearly three-quarters (71%) of global respondents said they plan to move more of their IT functions to the cloud, compared to 68% in the UK. Two-thirds (66% globally, 59% in the UK) of these plan on doing so within the next 12 months, despite nearly half (49% globally, 43% in the UK) of respondents still seeing perceived cybersecurity risks around cloud adoption as posing a threat to their business.
Cloud strategies considered include a dispersed multicloud approach, where a single company will use different cloud providers for different functions. This is a major trend emerging in the marketplace and corroborated by the study. 45% of global IT leaders, compared to 36% in the UK, say their technology strategy includes moving to a multicloud approach, which will have significant implications for the industry as businesses continue to diversify their portfolio of cloud providers. But while there is clearly a strategic shift underway, multicloud adoption is far from ubiquitous: fewer than one in five (17% globally, 14% in the UK) IT decision-makers said their business is currently deploying across multiple clouds. Hybrid cloud deployments—whereby companies use a combination of one or more public cloud providers with a private cloud platform or IT infrastructure—are more commonplace, with 34% of IT decision-makers globally already having hybrid strategies in place. This figure drops significantly in the UK, to 22%.
To cater to the rapid adoption of hybrid multicloud solutions, Equinix recently announced it has acquired bare metal automation platform leader Packet. Coupled with Equinix’s flagship cloud connectivity platform, Equinix Cloud Exchange Fabric™ (ECX Fabric™), which supports hybrid multicloud strategies by directly, securely and dynamically connecting distributed infrastructure and digital ecosystems globally, the service enables companies to bypass the public internet and make the move to the digital edge—whilst avoiding unnecessary cybersecurity risk.
Implementing an interconnected fabric of network and cloud hubs at the digital edge in this way simplifies the complexity of hybrid IT and provides the choice, scale and agility required for current and future digital business requirements. By providing this critical infrastructure in 55 markets across the world, Equinix is ensuring its customers are better equipped to securely reach everywhere, interconnect everyone and integrate everything that matters to their business.
Survey highlights
Cybersecurity must be considered at all points of a cloud migration.
Trend Micro has released the findings from research into cloud security, which highlights that human error and complex deployments open the door to a wide range of cyber threats.
Gartner predicts that by 2021, over 75% of midsize and large organizations will have adopted a multi-cloud or hybrid IT strategy. As cloud platforms become more prevalent, IT and DevOps teams face additional concerns and uncertainties related to securing their cloud instances.
This newly released report reaffirms that misconfigurations are the primary cause of cloud security issues. In fact, Trend Micro Cloud One – Conformity identifies 230 million misconfigurations on average each day, proving this risk is prevalent and widespread.
“Cloud-based operations have become the rule rather than the exception, and cybercriminals have adapted to capitalize on misconfigured or mismanaged cloud environments,” said Greg Young, vice president of cybersecurity for Trend Micro. “We believe migrating to the cloud can be the best way to fix security problems by redefining the corporate IT perimeter and endpoints. However, that can only happen if organizations follow the shared responsibility model for cloud security. Taking ownership of cloud data is paramount to its protection, and we’re here to help businesses succeed in that process.”
The research found threats and security weaknesses in several key areas of cloud-based computing, which can put credentials and company secrets at risk. Criminals capitalizing on misconfigurations have targeted companies with ransomware, cryptomining, e-skimming and data exfiltration.
Misleading online tutorials compounded the risk for some businesses, leading to mismanaged cloud credentials and certificates. IT teams can take advantage of cloud-native tools to help mitigate these risks, but they should not rely solely on those tools, the report concludes.
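As a concrete, purely illustrative example of the kind of misconfiguration involved (ours, not taken from the Trend Micro report), the sketch below uses the AWS boto3 library to flag S3 buckets whose ACLs grant access to all users. It assumes AWS credentials are already configured.

```python
# Illustrative sketch: flag S3 buckets whose ACL grants access to everyone,
# one of the most common cloud misconfigurations. Assumes AWS credentials
# are configured and boto3 is installed (pip install boto3).
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    open_grants = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") == ALL_USERS
    ]
    if open_grants:
        print(f"{name}: open to all users ({', '.join(open_grants)})")
```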
Trend Micro recommends several best practices to help secure cloud deployments:
RWE Supply & Trading integrates data centres into the energy transition.
Batteries are of central importance for the energy transition. They can help to cushion the fluctuating feed-in of renewable energy. To this end, many solutions are already being worked on in engineering and network technology. Help is now coming from an unexpected source: data centres. They store and process huge amounts of data, but so far they have only been considered large consumers of electricity. To be on the safe side, however, they have uninterruptible power supply (UPS) systems and emergency power generators available around the clock. These systems are rarely used.
Thanks to the "Master+" solution developed by RWE Supply & Trading and Riello Power Systems, data centres can now contribute to the energy transition: their UPS battery systems help to stabilise the grid. "Master+" features a premium battery with increased storage capacity and an integrated battery monitoring system. The system can automatically draw power from the grid or supply power to the grid in the event of grid imbalances. In addition, RWE has developed a service with which the emergency power generators can significantly relieve the load on the power grid by means of a few targeted operations. Both the UPS batteries and the emergency power generator are marketed with the support of RWE.
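The control principle behind such grid support is simple, and the sketch below is a toy illustration of the idea rather than RWE’s “Master+” implementation: a proportional frequency-response loop that discharges the UPS battery when grid frequency sags and charges it when frequency rises. The 250 kW rating is borrowed from the KSG installation described later; the deadband and ±0.2 Hz full-activation band are assumptions.

```python
# Toy illustration (not RWE's "Master+" code): proportional grid frequency
# response from a UPS battery. Positive output = discharge to the grid,
# negative = charge from the grid. Thresholds are assumptions.
NOMINAL_HZ = 50.0    # European grid nominal frequency
DEADBAND_HZ = 0.01   # ignore tiny deviations around nominal
FULL_BAND_HZ = 0.2   # assumed deviation at which full power is delivered
MAX_POWER_KW = 250   # rating borrowed from the KSG installation

def battery_response_kw(frequency_hz: float) -> float:
    """Map a measured grid frequency to a battery power setpoint."""
    deviation = NOMINAL_HZ - frequency_hz
    if abs(deviation) <= DEADBAND_HZ:
        return 0.0
    power = (deviation / FULL_BAND_HZ) * MAX_POWER_KW
    return max(-MAX_POWER_KW, min(MAX_POWER_KW, power))  # cap at rating

for f in (49.80, 49.95, 50.00, 50.05, 50.20):
    print(f"{f:.2f} Hz -> {battery_response_kw(f):+7.1f} kW")
```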
The Kraftwerks-Simulator-Gesellschaft mbH (KSG) is the first customer worldwide to use RWE’s UPS battery solution "Master+" and is also providing its emergency power generator for network services.
The cooperation with RWE offers many advantages for KSG
Dr. Burkhard Holl, Head of Engineering and Operations, summarises the benefits for KSG: "With 'Master+' and the marketing of our emergency power generator, we can benefit from the energy market: we have a higher storage capacity and our battery storage is monitored around the clock. This means greater security of supply while at the same time generating additional revenues – a solution that is both profitable and resource-efficient for us." Another advantage is that the emergency power generator is not used much more frequently than usual, even with RWE's remote control system.
"Put simply, we cycle the test runs of the emergency power generators, which take place anyway, in exactly the same way so that the networks are relieved," explains Claudius Beermann, the responsible product manager at RWE. "In this way, we replace wear-promoting test runs at low load with targeted operations under high load.“
Due to its high degree of innovation, "Master+" was awarded the German Data Center Prize in the category "Energy Technology" in 2018 and the eco://award in the category "Datacenter Infrastructure" in 2019.
Expansion of the data centre meets the requirements of European standards
KSG's data centre is now being expanded in stages: in the first stage, which has already been implemented, two "Master+" UPS battery systems of 250 kilowatts each and a 1,100 kW emergency power generator to secure the emergency power supply were installed. The next step is to increase the UPS output to 2 megawatts and to add a second emergency power generator. The certification of the KSG data centre against the TSI.STANDARD, including the DIN EN 50600-specific extensions, will be carried out by TÜViT. The overall concept of the site is currently undergoing conformity testing for Level 3, which denotes a highly available data centre. The catalogue of criteria also defines requirements for the planning of the building construction, energy supply and security systems trades, and specifies criteria for the operation of data centres.
Tanium has unveiled new global research ahead of the second anniversary of the European Union’s General Data Protection Regulation (GDPR). The research shows misalignment between data privacy regulation spending and business outcomes. Specifically, as businesses spend tens of millions on compliance, 93 percent have fundamental IT weaknesses that leave them vulnerable and potentially non-compliant.
The global study of 750 IT decision makers revealed that British organisations have spent on average £53.5 million each to comply with the GDPR, the California Consumer Privacy Act (CCPA), and other data privacy regulations over the past year. Most organisations have hired new talent (83 percent), invested in workforce training (88 percent) and introduced new software or services (76 percent) to ensure continued compliance. In addition, 89 percent of organisations have set aside or increased their cyber liability insurance by an average of £117 million each, to deal with the potential consequences of a data breach.
However, despite this increased investment, organisations still feel unprepared to deal with the evolving regulatory landscape, with over a third (36 percent) claiming that a lack of visibility and control of endpoints is the biggest barrier to maintaining compliance with regulations such as GDPR.
Increased spending not solving visibility challenges
This lack of visibility into and control over endpoints such as laptops, servers, virtual machines, containers and cloud infrastructure causes major challenges. In fact, the study revealed major visibility gaps in the IT environment of most organisations prior to the pandemic. Ninety-three percent of IT decision makers have discovered unknown endpoints within their IT environment, and 71 percent of global CIOs said they discover new endpoints on a weekly basis.
Mass home working and employee use of personal devices is likely to exacerbate these problems further, expanding the corporate attack surface. When compliance relies on understanding what tools you use, what endpoints you have and what data you hold across the entire organisation – these visibility gaps are potentially dangerous.
Chris Hodson, Chief Information Security Officer at Tanium said, “While it’s encouraging to see global businesses investing to stay on the right side of data privacy regulations, our research suggests that their good work could be undermined by inattention to basic IT principles. Many organisations seem to have fallen into the trap of thinking that spending a considerable amount of money on GDPR and CCPA is enough to ensure compliance. Yet without true visibility and control of their IT assets, they’re leaving a backdoor open to malicious actors.”
What is causing visibility gaps?
The majority (93 percent) of respondents acknowledged fundamental weak points within their organisations that are preventing a comprehensive view of their IT estate.
These visibility gaps are being exacerbated by the following:
The research found that UK firms have implemented an average of 41 separate security and operations tools to manage their IT environments. Tool sprawl like this further limits the effectiveness of siloed and distributed teams, adding unnecessary complexity.
Tech leaders are concerned about the consequences
In the study, IT leaders cited concerns that limited visibility of endpoints could leave their company more vulnerable to cyberattacks (57 percent), damage the brand reputation (42 percent), make risk assessments harder (36 percent), impact customer churn (27 percent) and lead to non-compliance fines (34 percent).
Respondents also revealed a false sense of confidence when it came to compliance readiness. Ninety-four percent of IT decision makers said they were confident of being able to report all required breach information to regulators within 72 hours. But with nearly half (46 percent) reporting challenges in getting visibility into devices on their network, this confidence appears to be misplaced — a single missed endpoint could be a compliance violation waiting to happen.
Chris Hodson, Chief Information Security Officer at Tanium concluded: “GDPR and CCPA represent the beginning of a complex new era of rigorous data privacy regulations. Although some regulators have postponed large fines due to the current pandemic, it doesn’t defer the requirement for companies to ensure personal information is stored and processed using the strictest safeguards.
“Technology leaders need to focus on the fundamentals of unified endpoint management and security to drive rapid incident response and improved decision making. The first step must be gaining real-time visibility of these endpoints, which is a crucial prerequisite to improved IT hygiene, effective risk management, and regulatory compliance. With most teams working from home these days and many having to use their own devices, this has never been more important.”
According to Atlas VPN estimations, damages caused by cybercrime are expected to reach more than $27 billion by 2025.
In 2019, authorities received a record-breaking 467,361 Internet-facilitated fraud complaints, with accumulated losses exceeding $3.5 billion. Yet, in 2020, lockdowns might act as a catalyst for the biggest hacker attack outbreak to date.
Atlas VPN projects that in 2020, there will be a 45% increase in cybercrime damages in comparison to 2019. This means that in 2020, the estimated monetary losses will reach over $5 billion.
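The arithmetic behind the projection is straightforward: a 45% increase on 2019's $3.5 billion in reported losses gives

```latex
\$3.5\ \text{billion} \times 1.45 \approx \$5.1\ \text{billion}
```

which matches the "over $5 billion" estimate.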
Rachel Welsh, COO of Atlas VPN, shares her thoughts on the future of cybercrime:
“Due to the pandemic, people are stuck at home, surfing the web, and working remotely. We can expect a record number of hacker attacks in 2020 since the pool for potential cybercrime victims has never been larger. Also, the latest cybercrime trends show that hackers are focusing on human layer errors more than ever.”
Cybercrime damages from 2014 to 2019
In 2014, hackers stole over $800 million from unsuspecting victims; by 2019, the figure had reached $3.5 billion. The monetary damages caused by cybercrime increased more than fourfold over those six years.
In 2019, the digital crimes that caused the most financial damage were business email compromise (BEC), romance fraud, and spoofing. Of these, BEC accounted for over half of the year's losses, at a staggering $1.77 billion.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 1.
IT is renowned – especially by those outside the industry – for being fast-moving and ever-changing.
By Steve Broadhead, Broadband-Testing.
In some ways this is true – product and service updates occur daily, in some cases several times a day. And from time to time the world of IT really does make an exponential leap forward, often due to component costs falling dramatically – just think in terms of storage capacity and memory, for example. But IT is equally cyclical in nature; almost like fashion at times, features and functionality that are seen as “essential” at some point in time are then cast aside for years before taking pride of place in the next era of IT.
Often there is a trigger point that sparks invention, or re-invention. Now we have the Covid-19 pandemic forcing the hand of companies to become totally flexible in the way they provide services to their staff and business partners, not least in terms of where those individuals actually work – and that means supporting homeworking more than ever before. Personally, I have found the reluctance to move to increased homeworking over the decades both surprising and frustrating, given the huge number of benefits it delivers – less traffic and travelling, more productive daytimes, flexibility in combining work and family life, reduced office costs – the list goes on and on… Against that list stands primarily one barrier – human resistance to such a fundamental change: a lack of trust from bosses in their staff when not directly under their noses, and a lack of faith in the individuals themselves to buckle down and work in an environment seen to have many distractions.
But from a technology perspective, there are no issues and no requirement to reinvent IT in order to support a new, and potentially massive wave of home/remote working. I recall using remote control technology in the ‘80s to support remote office and homeworkers, and testing a variety of relevant products from the likes of Richmond Systems in the UK from the late 80s onwards – remote control, helpdesk, asset management, remote network management tools etc – that worked perfectly back then, even over dial-up modem links. At the same time, I started working from home myself, no Internet, no “cloud”, but we did have remote connectivity and services. And they worked.
A recently published article in Digitalisation World, “Predicting Life After The Virus”, substantiated my own beliefs and desires, stating that “home is where the work will be” and that IT sees “remote work as the norm, not the exception for most businesses”. Meanwhile, companies such as the aforementioned Richmond Systems have not gone away over the decades, but have continued to develop and adapt the products and services that worked back then to perform better than ever now, taking advantage of contemporary technology – high-speed and affordable Internet access to most parts of the world (let alone the UK), cloud-based deployment, support and management, and user/customer self-service portals; all elements that allow IT to be a completely distributed model and fully support remote/homeworkers.
The question is, from an IT management perspective, what do you really need in 2020 in order to establish a raft of homeworkers, whether they number dozens or thousands? Moreover, since the changeover is a rapid requirement for many, how do you also keep it simple and easily deployable? However simple and fool-proof your homeworking solution, the reality is that humans tend to panic, so their dependency on technical support and help via remote access will increase, initially at least. And as long as there is some form of centralised office with workplaces, outgoing access to office-based devices, storage and services is equally important – it is a bidirectional process.
Fundamental tools required to enable this homeworker generation of IT include:
• Remote control on demand: allowing remote technical support of any user in any location, including inventory scanning and updating. Important here are speed and security from a connection standpoint, as well as true cross-platform support, thereby allowing secure, remote connectivity to any desktop, server, mobile or embedded device running Windows, Mac, Android, Windows CE etc. This should increasingly include access to geographically distributed IoT devices such as CCTV, transmitters, healthcare tech and infotainment systems. The worst case in a support situation is the inability, at a crucial moment, to access a given device due to incompatibilities.
• Cloud-based user management: providing access to home or office-based devices/endpoints from any other device – for example, for file transfer – basically from anywhere to anywhere. This includes integration with a service desk solution that allows management of all support activity among the user base. Without a centralised solution – increasingly cloud-based – orchestrating all the ongoing remote support work, all you’re going to create is a giant, distributed support headache, in terms of who did what, where and how, and what the state of resolution was.
• Self-service portals: providing remote users with a simple and efficient solution for managing their support activity; for example, to get instant answers/remedies, raise support requests, request access permissions and carry out related activities.
The latter is perhaps one of the more overlooked aspects of remote/homeworking, but is a primary method to massively increase productivity – both from a user and support team standpoint. Key here is to build in as much automation as possible, in order to avoid the otherwise inevitable swamping of the support team. Equally, access to data, applications, tools and support needs to be 24x7 – for homeworkers especially, the 9-5 working day rules go out of the proverbial window (which might now look out onto the garden!). Self-service portals allow companies to centralise relevant, up-to-date information and guidance that employees can easily access at any time – so there are no excuses on the “I didn’t know that” front.
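To illustrate the automation-first principle, here is a minimal sketch of a self-service support endpoint – not any particular vendor’s product, and with hypothetical endpoint and field names: it tries to answer the user from a small knowledge base before raising a ticket.

```python
# Minimal sketch of self-service automation (hypothetical endpoint/fields):
# answer from a knowledge base first; open a ticket only when no instant
# answer matches. Requires Flask (pip install flask).
from flask import Flask, jsonify, request

app = Flask(__name__)

KNOWLEDGE_BASE = {
    "vpn": "Restart the VPN client and reconnect with your usual profile.",
    "password": "Use the reset link on the login page; no ticket needed.",
}
tickets = []

@app.route("/support", methods=["POST"])
def support():
    issue = request.get_json(force=True).get("issue", "").lower()
    # Automation first: serve an instant answer when a keyword matches
    for keyword, answer in KNOWLEDGE_BASE.items():
        if keyword in issue:
            return jsonify({"resolution": answer, "ticket": None})
    # Otherwise raise a ticket for the support team, available 24x7
    ticket_id = len(tickets) + 1
    tickets.append({"id": ticket_id, "issue": issue, "status": "open"})
    return jsonify({"resolution": None, "ticket": ticket_id})

if __name__ == "__main__":
    app.run(port=8080)
```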
As Laurence Coady, CEO of Richmond Systems, explained: “Customer and user support portals are a powerful tool for managing remote workers as they provide a trusted source of guidance and reduce the risk of well-meant, but often misleading, ‘word-of-mouth’ advice from colleagues.”
The classic KISS (Keep It Simple Stupid) acronym is never more relevant than when applied to this kind of portal. For years, it seems, IT has been trying to develop a successful customer- or user-facing element to many of its solutions, but all too often these have simply been too complex for the users to understand. Self-service is all about ease of use and understanding what to do. Anyone who has tried to use automated checkout services in a UK supermarket will understand the trials and tribulations involved in perfecting such a system! The ideal solution is a fully customisable, no-programming-required portal that is as easy to create as it is to use, based around a series of onscreen tools and a workflow planner – for example, like a traditional flowchart. The portal must appear familiar from a company user perspective, rather than a bolt-on tool that leaves them uncertain. It is designed to help, after all…
Another key element is making this portal usable on any platform, so it is correctly presented to users regardless of the screen size/device type they are browsing with – a classic failing of many remote IT applications over the years. Wide customisation options, combined with no requirement for programming skills, mean that companies can create genuinely simple-to-use portals that are properly suited to their business and user base. This equally makes the service portal a perfect tool, for example, for MSPs servicing multiple clients from effectively a single platform – think multi-tenant buildings, for example – with each client appearing to have their own portal. There should also be analytics benefits here, in that a portal can record the entire user journey, including any pages visited, controls clicked and inputs given whilst raising a support request, for example.
Combining reliable and secure remote support access with user self-help functionality goes a long way towards enabling successful, productive, proactive remote working. And anyone who still thinks it’s “something that will occur in the future” had better think again – and quickly! The working world isn’t changing – it has already changed…
DW talks to Jim Crook, Senior Director of Marketing, CTERA, about the likelihood that the current imperative to embrace remote working will, in fact, become a permanent feature of the enterprise into the future.
1. Please can you provide some background on CTERA?
CTERA helps organizations use cloud to replace legacy file storage systems without compromising security or performance.
Our technology provides a hybrid connection for remote sites and users to a global file system powered by any public or private cloud infrastructure. Customers gain new levels of multi-site productivity and centralized data management while keeping costs under control.
Today our global file system powers more than 50,000 enterprise locations and millions of corporate devices in 110 countries. We’re proud to count McDonald’s, Humana, WPP, the U.S. Department of Defense, and many other leading organizations as customers.
2. COVID-19 is top of mind for organizations everywhere. What are you seeing from the CTERA POV?
Obviously, this is an event of historic magnitude. The short-term implications create a serious global economic challenge, especially in directly affected industries. As the realization grows that the current situation could span several months at least, we are now receiving inquiries from enterprise customers looking to rapidly expand remote working options to all their employees, and to provide better independence for remote site workers who cannot come to the main HQ.
For CTERA as a high-tech company, remote work is part of our DNA. All our employees have laptop and VPN access and can serve customers from anywhere. We also use our own products, which provide fast and secure data access to all our global employees and branches.
From a longer-term perspective, this crisis, unfortunate as it is, has the potential to dramatically accelerate industry’s approach to remote work and cloud services, which will contribute to less traffic congestion, less air pollution, and better health and quality of life.
3. You mentioned remote work, which has become a huge area of focus for enterprises. How are you helping customers with this shift?
We’ve always believed the future to be remote and the COVID pandemic has proven that to be true. Even before the coronavirus hit, the trend of remote work was emerging, and now businesses have begun to realize and embrace the fact that a remote workforce is a positive. Once we are post-coronavirus, remote work won’t go away. Many workers will continue working from home. Smaller offices and travel restrictions will be commonplace; employees will be further divided into silos; and organizations will seek to improve their readiness for disasters and pandemics.
To that end, our focus is helping customers close the gaps in their remote work/WFH strategies that have been revealed by COVID-19. We help them extend their corporate file system to remote users without breaking file access protocols, changing file structures, or compromising security.
4. How does your approach differ from other remote work strategies?
The general approach to remote work, especially in traditional industries in which cloud adoption is not prevalent, has been to set up VPNs for office users to connect to an office file server and then to hope the end user has strong enough connectivity to ensure files can be accessed quickly through a web browser. Of course, this tends to be a clunky, inefficient process that slows user productivity and is generally not sustainable for extended periods, such as the weeks/months spent working from home in a global pandemic.
But going all-in on a SaaS or cloud-based alternative also doesn’t make sense for these companies because of the significant change required – to the user access experience and protocols, file system structures, security practices and so on – especially when the organization wants to return to its normal ways of working at the office once the crisis abates.
Organizations are finding it more productive and cost-efficient to use cloud technology to extend traditional file infrastructure capabilities by replicating office data to the cloud and enabling WFH users to access the corporate file system from their usual network file sharing protocols. This is the basic CTERA solution architecture and it will become increasingly useful in the post-COVID era, where remote work perhaps will be the norm for many businesses.
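As a toy illustration of that pattern (ours, not CTERA’s implementation), the sketch below serves a file from a local cache when present and otherwise fetches it from a stand-in cloud tier, so the caller keeps using ordinary file paths; all names and paths are invented for the example.

```python
# Toy illustration of a cloud-extended file tier (not CTERA's code): serve
# from the local cache when possible, otherwise fetch from the cloud copy
# and cache it. The in-memory dict stands in for a cloud object store.
import pathlib

CACHE_DIR = pathlib.Path("/tmp/filer-cache")   # local edge cache
CLOUD_STORE = {                                # stand-in for cloud storage
    "finance/q1-report.xlsx": b"...report bytes...",
}

def read_file(path: str) -> bytes:
    cached = CACHE_DIR / path
    if cached.exists():
        return cached.read_bytes()             # fast local hit
    data = CLOUD_STORE[path]                   # fetch from the cloud tier
    cached.parent.mkdir(parents=True, exist_ok=True)
    cached.write_bytes(data)                   # cache for the next access
    return data

print(len(read_file("finance/q1-report.xlsx")), "bytes (cold read)")
print(len(read_file("finance/q1-report.xlsx")), "bytes (cache hit)")
```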
5. Which CTERA products / services are seeing the most interest from customers in this new environment?
The unified architecture of our global file system replicates file data across branch filers, remote desktops, and mobile and VDI clients. This means we can support a wide variety of file services use cases, and therefore our customers are well-positioned to securely extend corporate data to users working anywhere.
Currently customers are looking for the most seamless, secure, and quick way to ensure productivity for WFH users. They want as little change to the file services experiences users enjoyed at the office, and they want to maintain the security and performance of the office environment as well. This is the main reason customers and partners find the totality of the CTERA platform to be the right fit.
Without any new infrastructure investment, organizations can enable secure file sharing and collaboration and data protection through our endpoint client or build the CTERA global file system into their VDI deployments. Both paths enable WFH users to have an office-like experience with full synchronization to corporate data. We call it Remotifying Your IT.
6. Do you think it will be possible to quickly return to the pre-crisis level of activity once the containment measures are lifted? Or do you think it will be longer?
While we believe remote work will remain in force long after the health crisis subsides, that doesn’t mean user productivity levels cannot return to what they once were. In fact, the adoption of remote work models provides huge advantages in terms of business continuity, productivity, and operational efficiency. We’re seeing it in our customers, in our partners, and in our own organization.
But most organizations, including those in traditional industries, will need to learn quickly how to expand remote workforce enablement from the current level of around 20% of employees to over 90%. As such, we are focusing much of our efforts on assisting customers with IT Remotification and ensuring they are ready to meet the challenges of remote workforce enablement today and tomorrow.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 2.
As the COVID-19 pandemic continues to impact millions of people worldwide, Lawrence Livermore National Laboratory (LLNL) and its industry partners are committed to applying the nation’s most powerful supercomputers and expertise in computational modeling and data science to battling the deadly disease.
To assist in this effort, the Laboratory, Penguin Computing and AMD have reached an agreement to upgrade the Lab’s unclassified, Penguin Computing-built Corona high performance computing (HPC) cluster with an in-kind contribution of cutting-edge AMD Instinct™ accelerators, expected to nearly double the peak performance of the machine.
Under the agreement, AMD will supply its Radeon Instinct MI50 accelerators for the Corona system, expected to enable it to exceed 4.5 petaFLOPS (floating point operations per second) of peak compute power. The system will be used by the COVID-19 HPC Consortium, a nationwide public-private partnership that is providing free computing time and resources to scientists around the country engaged in the fight against COVID-19, and by LLNL researchers, who are working on discovering potential antibodies and anti-viral compounds for SARS-CoV-2, the virus that causes COVID-19. The Corona system is supported by AMD EPYC CPUs, working side-by-side with AMD Radeon Instinct Accelerators, and uses Penguin Computing’s Tundra Extreme Scale platform.
Delivered to LLNL in 2018 under a contract with Penguin Computing, the Corona system — named for the total solar eclipse of 2017 — is used for unclassified open science applications. The upgrade comes at no cost to the National Nuclear Security Administration (NNSA) but is intended by AMD to support research into COVID-19, while furthering its partnership and collaboration with LLNL in software and tools development. In exchange for the upgraded GPUs, AMD is securing compute cycles that will be used for a variety of purposes, including providing time for LLNL COVID-19 research and proposals approved by the COVID-19 HPC Consortium, as well as supporting development efforts by AMD software engineers and application specialists.
“It is well known that AMD is a key partner in the upcoming delivery of the first NNSA exascale-class system, the Hewlett Packard Enterprise El Capitan supercomputer,” said Michel McCoy, director of LLNL’s Advanced Simulation and Computing program. “But an enduring partnership involves multiple collaborations, in each of which we pursue common goals. We are delighted that AMD made this generous offer, particularly given the need for a determined pace in mitigating and, ultimately, in defeating this pathogen.”
The AMD Instinct MI50 server accelerator is optimized for large-scale deep learning. The AMD accelerators deliver up to 26.5 teraFLOPS of native half-precision or up to 13.3 teraFLOPS of single-precision peak floating-point performance, combined with 32GB of high-bandwidth memory.
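For a rough sense of what those per-card figures mean at system scale, the short Python sketch below aggregates them; the accelerator counts are illustrative assumptions, since the precise configuration of the upgraded machine is not detailed here.

```python
# Back-of-the-envelope peak-performance arithmetic using the per-accelerator
# figures quoted above. The GPU counts are illustrative assumptions, not
# published configuration details.

MI50_FP16_TFLOPS = 26.5   # peak half-precision teraFLOPS per accelerator
MI50_FP32_TFLOPS = 13.3   # peak single-precision teraFLOPS per accelerator

def system_peak_petaflops(num_gpus, per_gpu_tflops):
    """Aggregate peak compute across accelerators, in petaFLOPS."""
    return num_gpus * per_gpu_tflops / 1000.0

# How many MI50s would it take to exceed 4.5 petaFLOPS at single precision?
gpus_needed = 4.5 * 1000.0 / MI50_FP32_TFLOPS
print(f"~{gpus_needed:.0f} accelerators to exceed 4.5 PF at FP32")
print(f"340 accelerators -> {system_peak_petaflops(340, MI50_FP32_TFLOPS):.2f} PF")
```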
“An effective COVID-19 response requires the best and brightest minds working together. By leveraging the massive compute capabilities of the world’s most powerful supercomputers, we can help accelerate critical modeling and research to help fight the virus,” said Forrest Norrod, senior vice president and general manager, AMD Datacenter and Embedded Systems Group. “AMD is proud to assist in that effort with a contribution of processors well-suited for the science now underway through the COVID-19 HPC Consortium and Lawrence Livermore National Laboratory.”
“Penguin Computing is committed to helping in the worldwide research efforts to fight against the COVID-19 virus,” said Sid Mair, president of Penguin Computing. “Increasing the capabilities of Corona for both HPC and AI computing could greatly enhance research into the nature of COVID-19, possible vaccines, treatments and contagion pathways.”
Corona will be one of the most capable of the seven LLNL supercomputers made available to researchers through the COVID-19 HPC Consortium, which involves more than a dozen member institutions in government, industry and academia and is spearheaded by the White House Office of Science and Technology Policy, the U.S. Department of Energy and IBM. The consortium aims to accelerate development of detection methods and treatments for COVID-19. AMD officially joined the consortium on April 6.
AMD software engineers will provide support in porting certain applications critical to the COVID-19 effort to Corona and optimizing the performance of the GPUs on relevant applications.
The Corona system also will aid LLNL researchers in their hunt for potential antibodies and antiviral drugs to combat the virus. COVID-19 has become a top priority for the Corona system, where it is being used to virtually screen, design and validate antibody candidates for SARS-CoV-2 and to simulate the interaction of small molecules with the virus’ proteins to discover possible antiviral compounds. The upgrade will allow LLNL researchers to speed up the modeling of molecular interactions vital to the effort and run a wider and more diverse set of applications on the system.
“The addition of these new state-of-the-art GPUs on Corona will boost the capability of the teams working on COVID-19,” said Jim Brase, LLNL’s deputy associate director for Programs. “It’s going to allow us to go faster, with more throughput. We’ll have more resources, so we can run more cases and potentially get to new designs for both antibodies and small molecules faster, which may lead to better treatments. They’ll also enable some of our new software, both for simulation and machine learning applications, to run more efficiently.”
Employing a first-of-its-kind virtual screening platform combining experimental data with machine learning, structural biology, bioinformatic modeling and high-fidelity molecular simulations, a team of LLNL researchers has used the Corona system to evaluate therapeutic antibody designs that could have improved binding interactions with the SARS-CoV-2 antigen protein. The team has narrowed the list of antibody candidates from a nearly infinite set to about 20 possibilities and has begun exploring additional antibody designs. The researchers believe the upgrade will double the number of computationally expensive simulations they are performing, making it more likely they’ll discover an effective antibody design.
LLNL computer scientists and computational biologists also are using the Corona system to examine millions of small molecules that could have antiviral properties against SARS-CoV-2. Increasing the speed and performance of Corona will allow researchers to perform additional, highly detailed molecular dynamics calculations to better evaluate possible SARS-CoV-2 target sites for small-molecule inhibitors that could prevent infection or treat COVID-19.
ExtraHop has issued a report detailing rapid and substantial changes in device usage trends as businesses shifted their operations in March due to COVID-19. The report also warns of the security complexity and risks posed by connected devices—both those used by employees at home, and those left idle but connected to the office network.
While there are many lenses through which to explore the ways in which COVID-19 is reshaping business operations, connected devices—including internet of things (IoT) devices—and the ways in which people and organizations interact with them tell a story all their own. Using anonymized, aggregate data from across its global user base, ExtraHop analyzed business-related device activity during a one-week period at the end of March 2020. This data was compared to activity from a similar study of the same global user base conducted in November 2019. The results reveal not only patterns that illuminate the state of work during the COVID-19 crisis, but also the long-term security implications of a distributed workforce.
“The almost overnight shift to remote work required a massive effort just to ensure the availability of applications and critical resources for employees outside the office,” said Sri Sundaralingam, Vice President, Cloud and Security Solutions at ExtraHop. “For many organizations, the management of IoT and other connected devices may have been an afterthought, or at least something they didn’t anticipate having to handle long term. As availability and security issues surrounding remote access become more settled, this needs to be an area of focus.”
Robotic Process Automation (RPA) used to conjure up visions of workplaces made up of a robot workforce. Today, we know that RPA isn’t about a physical presence at all - it’s a clever way of automating tasks, generally the more mundane and repetitive ones, through the use of software technology.
By Roman Mykhailyshyn, Robotic Process Automation Technical Lead at Ciklum.
RPA has been adopted by many industries and sectors, but it is banks, insurance companies and utilities who are perhaps at the forefront of incorporating it into their business processes. Researchers at the London School of Economics found that RPA in the energy sector delivered a 200% ROI. In this instance, only 25% of the processes were automated, indicating that if this were increased, productivity and returns could be even higher.
Global insurance company Zurich is a fantastic example of how the implementation of RPA as part of their insurance claims process resulted in the savings of millions of dollars. Organisations which depend on defined processes to operate can use ‘bots’ to carry out some of the more repetitive tasks with greater efficiency, allowing staff to concentrate on higher-value work.
Another example is Walgreens, one of the largest drugstore chains in the world, which adopted RPA in their human resources department. By doing so they identified 2,000 members of staff were on leave of absence each day and were able to optimise the tracking and reporting of paid and unpaid leave, boosting HR efficiency by 73%.
Automation through digital processes can also ensure greater efficiency by reducing the chance of human error. It also increases the speed at which tasks are carried out. Little wonder, then, that Gartner has identified it as the fastest-growing segment of the global enterprise software market. In fact, it estimates that by the end of 2022, 85% of organisations with revenue of more than $1 billion will be using some form of RPA in their business.
It’s important to remember that RPA will not work for everyone - or at least not on the same scale. Businesses need to be organisationally mature, operating via defined processes, in order to realise the cost-saving and time-saving benefits.
How to successfully implement RPA at your company
If you are considering adding the magic powers of RPA to your team and operations, these are the main points to think about:
1. What aspects of the business will benefit from RPA?
Whilst implementing RPA is certainly beneficial for many aspects of your business, it cannot solve all of your business problems.
RPA delivers the best results for organisations which are structured in their approach, using standard operating procedures (SOPs) within each department and having processes in place to track both successes and mistakes. There is little value in automating ineffective processes.
The next step is to review your business goals and analyse whether implementing RPA can contribute towards achieving these.
For example, if your business is struggling with a perception that it is losing touch with new technology, launching a business transformation programme built around innovative technology may be a solution. RPA can deliver immediate productivity benefits - quick, proven results - whereas other innovative programmes may take longer to pay off.
2. Address employment concerns directly
Your staff might feel threatened by the launch of an automation project. There will be worries over job security and the concern that they will be ‘replaced by robots’.
But I like to think humans and RPA bots complement each other. While an RPA bot handles the routine, mundane and repetitive tasks, a human can focus on what they are still better at than machines: strategic initiatives (how to improve processes and open new sources of revenue) and building stronger relationships with clients and internal stakeholders.
It’s also worth considering whether staff can be retrained or have transferable skills which could be used elsewhere within the business. Employee loyalty and adherence to company values and policies are qualities worth holding on to wherever possible.
3. Adopt an 80/20 analysis as a focus tool
RPA can improve a variety of corporate functions such as finance, human resources, supply chain or procurement, to name a few. A good place to start when implementing RPA is by adopting the 80/20 concept as part of your project planning.
Begin by asking which significant department or function is roughly 80% driven by repetitive processes. Then identify the 20% of tasks within that department with the most repetitive processes, and make these your focus.
For example, if the finance department has been identified, invoice processing is often a strong first candidate; a rough scoring approach is sketched below.
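As a rough illustration of this 80/20 triage, the Python sketch below scores a set of hypothetical finance tasks by repetitive volume; the task names, figures and weighting are invented for the example.

```python
# Hypothetical 80/20 triage: rank a department's tasks by how much
# repetitive workload they account for, then focus on the top slice.
# Task names and figures are invented for illustration.

tasks = {
    "invoice processing":  {"monthly_volume": 4200, "repetitiveness": 0.9},
    "supplier onboarding": {"monthly_volume": 300,  "repetitiveness": 0.6},
    "expense approvals":   {"monthly_volume": 1800, "repetitiveness": 0.8},
    "ad-hoc reporting":    {"monthly_volume": 250,  "repetitiveness": 0.3},
}

def score(t):
    # Weight volume by how repetitive (and therefore automatable) it is.
    return t["monthly_volume"] * t["repetitiveness"]

total = sum(score(t) for t in tasks.values())
for name in sorted(tasks, key=lambda n: score(tasks[n]), reverse=True):
    print(f"{name:20s} {score(tasks[name]) / total:6.1%} of repetitive workload")
```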
4. Analyse and track your outcome
This might sound pretty obvious, but completely hands-off automation is not here yet, so it is important to monitor the work undertaken to minimise any unpleasant surprises. When implementing RPA, the general rules apply more than ever: think clear planning and KPI-based goal setting. Introduce monthly management review reports focusing on both quantitative data points (e.g. the number of tickets or issues resolved) and qualitative elements (e.g. comments from customers and employees using the system) to analyse and track outcomes.
This will allow any issues to be quickly resolved and processes to be improved as and when required.
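A minimal sketch of what such a monthly review might aggregate, with invented record fields standing in for real bot logs:

```python
# Combine quantitative data points with qualitative notes for a monthly
# RPA review, as described above. The record fields are invented.

bot_runs = [
    {"resolved": 410, "exceptions": 12, "feedback": "faster invoicing"},
    {"resolved": 385, "exceptions": 30, "feedback": "two mis-keyed POs"},
    {"resolved": 402, "exceptions": 9,  "feedback": "no complaints"},
]

resolved = sum(r["resolved"] for r in bot_runs)
exception_rate = sum(r["exceptions"] for r in bot_runs) / resolved
print(f"Tickets resolved this month: {resolved}")
print(f"Exception rate: {exception_rate:.2%}")
print("Qualitative notes:")
for r in bot_runs:
    print(f"  - {r['feedback']}")
```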
5. Manage the implementation as a project
Introducing innovative technology will never be challenge-free. Consider starting with a small and agile pilot project in one department, such as human resources, to monitor the automation process, allowing any issues to be quickly identified and resolved, then slowly expand. Start with a simple, well-defined task before taking on more complex ones.
It may be worth considering appointing a project manager to oversee the implementation, especially in larger organisations. This will, in turn, reassure stakeholders by having an experienced and qualified person on board, as well as helping to eliminate unwelcome surprises.
6. Seek external support
Introducing new technology will present a challenge to staff, particularly when it has not been used before. Gaining external support through training, consultation or full project implementation provided by experts like Ciklum will help ensure a smooth transition to RPA adoption in your business. Ciklum supports clients through all phases of their RPA journey, from business case analysis and proof of concept through to bot deployment and production support.
In an effort to get from point A to point B as quickly as possible, many companies jump into automation without considering the bigger picture. They adopt one tool to solve a problem, and then another one to handle a different set of challenges. The end result is that organizations use multiple tools and technologies – many of which don’t co-operate and collaborate with each other – to handle different parts of a larger process.
By Chris Huff, CSO, Kofax.
What else causes friction in the business journey? Processes that exist but haven’t yet been automated. When asked about the untapped opportunity within their organisations, more than three-fourths of senior executives said 60 percent or more of process work could be automated while nearly one in five said 80 percent or more, according to a Forbes Global Insight survey.
Clearly, there’s still much work to be done to connect the dots, and cobbling together solutions won’t cut it. That’s why a platform-centric approach is best. It eliminates the need to have humans fill the gaps in processes and saves the time and headaches associated with making multiple, disparate tools work together. What’s more, the flexibility of a single platform built on complementary technologies allows businesses to be more agile and meet the demands of customers, employees and suppliers today, next week and next year.
How to optimize the business journey
Of course, multiple technologies will always be needed to handle different parts of the business journey. It’d be great if this could be achieved with a single automation tool, but that’s not the case. For example, executives and managers need access to advanced analytics, while back office workers in accounting and finance often benefit the most from Robotic Process Automation (RPA) and cognitive capture. Just about everyone wants support for mobile (including external vendors and customers).
A KPMG and HFS Research survey found companies are investing in a broad array of these intelligent automation capabilities, but only about 10 percent say they’re leveraging an integrated solution approach. Yet it’s a platform-centric approach which enables organisations finally to stop cobbling together solutions and close the gaps.
A combination of “smart technologies” – such as RPA, cognitive computing, process orchestration, mobility and engagement – designed to work together and integrated on a single platform, will transform your business, making it more agile and competitive. The following steps will help you make a smooth transition to a platform-centric approach and achieve end-to-end automation faster.
Step 1: Build a digital workforce
Repetitive, manual tasks bog employees down as they navigate between systems and copy and paste data. It’s inefficient and not a smart use of employees’ time. RPA uses software robots to automate manual, data-driven activities. These bots can easily integrate data from internal and external systems (including websites, portals and enterprise applications) without the need for coding or months of development time. You can deploy digital workers quickly and scale as needed, freeing up your human workers to focus on strategic tasks that add more value to the business. The right balance of digital and human workers gives your business the edge required to stay competitive and stand out in the marketplace.
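As a toy illustration of the kind of swivel-chair work such a bot takes over, the Python sketch below reads records exported from one system and posts them into another; the endpoint URL and field names are hypothetical, and a real RPA platform would achieve this through configuration rather than code.

```python
# Minimal sketch of a copy-paste task a software robot replaces: reading
# records exported from one system and posting them into another. The
# endpoint URL and CSV field names are hypothetical.
import csv
import requests

def sync_invoices(csv_path, api_url):
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            payload = {
                "invoice_id": row["InvoiceNumber"],
                "amount": float(row["Amount"]),
                "supplier": row["Supplier"],
            }
            resp = requests.post(api_url, json=payload, timeout=10)
            resp.raise_for_status()   # surface failures rather than hide them

# sync_invoices("invoices.csv", "https://erp.example.com/api/invoices")
```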
Step 2: Put documents to work
Processing documents and electronic data is another bottleneck in the digital transformation journey for many businesses. Cognitive capture, however, transforms how documents and electronic data are captured and processed. Multichannel document capture is combined with optical character recognition (OCR), delivering cognitive document automation (CDA) technology which enables organisations to quickly process documents, images and unstructured data. Artificial intelligence takes it to the next level, using machine learning and natural language processing to identify, classify and extract content and data from documents and records. Employees know exactly what a given document’s about and what information it contains, so they know what the next appropriate step is in the process. The application of these additional intelligent technologies also makes it possible to integrate CDA with downstream processes and make connections with other internal systems, such as CRM applications – all of which contribute to complete end-to-end automation.
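As a miniature of the capture-then-classify pattern described above, the sketch below pairs the open-source Tesseract OCR engine with placeholder keyword rules; production cognitive capture uses trained machine-learning models rather than hand-written rules like these.

```python
# Toy capture-then-classify pipeline: OCR a scanned page, then route the
# document on simple keyword rules. The routes and keywords are placeholders
# for the trained models a real cognitive capture product would use.
import pytesseract          # requires the Tesseract OCR engine installed
from PIL import Image

ROUTES = {
    "invoice":  ("invoice", "amount due", "vat"),
    "contract": ("agreement", "party", "term"),
    "claim":    ("claim", "policy number", "incident"),
}

def classify_scan(image_path):
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    for label, keywords in ROUTES.items():
        if any(keyword in text for keyword in keywords):
            return label
    return "manual review"  # fall back to a human when unsure

# print(classify_scan("scanned_page.png"))
```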
Step 3: Cut out the middleman
It’s not enough to automate simple tasks. Truly successful organisations take automation to the next level, simplifying inefficient processes and creating a streamlined workflow that benefits employees, vendors and customers. You can apply process orchestration to a range of functions across the enterprise – including supplier and customer self-service, claims processing, compliance and regulation checks and customer onboarding.
Capabilities such as omnichannel document capture and extraction, mobile access, workflow automation and optimization can improve employee productivity, increase operational efficiency and lower costs. Meanwhile, integration options and analytics can accelerate the customer journey and help you uncover new opportunities.
Step 4: Optimize the customer experience
Your customers want interaction with your business to be easy and on their terms. Mobile technology allows companies to engage with customers in their preferred communications channels and eliminates the need to enter information manually. Businesses can also leverage advanced analytics to optimize the customer experience and resolve issues faster at every stage of the process. A powerful mobile solution includes support for identity verification, document gathering, personalized omni-channel communications and e-signature within a secure end-to-end mobile experience.
Step 5: Infuse decisions with insights
With real-time information, you can make smart decisions and drive the business forward. Advanced analytics monitor and analyse information across business processes and systems. You’ll gain an accurate and comprehensive view of how the business is doing – and where it needs to adapt to the changing marketplace. Customer service and sales reps have the information they need to hold more engaging conversations with customers. Real-time data means problems are identified and addressed immediately, before they get out of hand.
Work like tomorrow
To work like the digitally enabled company of tomorrow, organisations need processes – not just tasks – to be seamlessly integrated and automated end-to-end. A platform-based approach removes the need for humans to fill process gaps and spares teams the time and headaches of stitching multiple, disparate tools together.
Ultimately, companies can achieve their digital transformation objectives the easy or hard way. Businesses can continue to cobble together solutions – or they can be more agile, dramatically accelerate time-to-value and improve ROI with a platform-based approach.
Many organisations have automated knowledge work processes powered by a range of technologies including Robotic Process Automation (RPA) and Artificial Intelligence (AI), and this adoption will only continue to grow.
By James Ewing, Regional Director UK & Ireland at Digital Workforce.
In fact, within the next two years, Gartner predicts that the majority (72%) of global organisations will have adopted an automation strategy. Increasing the level of automation in the organisation not only frees up workers’ time for more valuable tasks under normal circumstances but also helps mitigate the risk of business disruption in exceptional situations.
As an increasing number of companies embark on their automation journey, business leaders and IT teams alike must ensure the right automation strategy is in place. We live in a rapidly evolving digital environment, so to fully reap the benefits of automation, organisations must understand how to ensure business continuity when things change or go wrong.
Start simple
Businesses around the world understand the power and potential of RPA. According to McKinsey, 88% of businesses want to implement more robotic automation but often don’t know where to start, particularly when working out which processes should be automated first. When considering where to begin their automation strategy, companies often lean towards the most complex and business-critical tasks – but this is the wrong approach to take.
By jumping straight in and automating business critical processes, organisations risk crumbling under pressure and their automation strategy could fail before it has even started to deliver any business benefits. Instead, businesses should look to automate processes that are low in complexity and carry minimal risk to business continuity if things go wrong.
Organisations looking to implement RPA or other automation technologies should have a thorough understanding of the business processes and their degree of criticality. Failing to plan for business continuity under changing circumstances is likely to lead to implementation problems, inflated expenses and process failures.
Analyse the business impact
The next step to ensuring business continuity is conducting a business impact analysis (BIA) to establish response priorities and understand the risks of technology downtime. Companies need to consider whether their automated processes require seasonal scaling and change in relation to their long-term business objectives and KPIs.
For instance, when we look at the healthcare sector, hospitals see an increased flow of patients during the winter months. As such, during this time, workers will be seeking methods to speed up the time they spend on administrative responsibilities and mundane tasks in order to spend more time on the front line helping patients. With the help of digital workers, knowledge-based tasks can be handled more quickly and efficiently, freeing up human time for other responsibilities.
Organisations should also identify potential risks to running automated processes, such as annual leave, sick leave and employee departure, to mitigate the impact of these circumstances as much as possible. Not only can downtime impact a business’s revenue, it can also damage the company’s reputation or, worse, lead to consequences such as losing a customer, sanctions for failing to keep to service terms and delays to production.
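As a hedged sketch of what a simple BIA pass might look like, the snippet below ranks automated processes by downtime exposure; the process names and figures are invented for illustration.

```python
# Rank automated processes by downtime exposure: the cost of an hour of
# downtime weighted by the hours likely at risk per incident. All names
# and figures below are invented for illustration.

processes = [
    {"name": "payroll run",        "cost_per_hr": 4000, "hrs_at_risk": 2},
    {"name": "patient admissions", "cost_per_hr": 9000, "hrs_at_risk": 1},
    {"name": "report archiving",   "cost_per_hr": 150,  "hrs_at_risk": 8},
]

def exposure(p):
    return p["cost_per_hr"] * p["hrs_at_risk"]

for p in sorted(processes, key=exposure, reverse=True):
    print(f"{p['name']:20s} exposure ~£{exposure(p):,} per incident")
```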
An automation strategy must be aligned with an organisation’s business objectives. Early on, companies should establish exactly what they wish to achieve through their automation solution as well as the risks and costs related to slow RPA development and process failures.
The best way to tackle these challenges is a proactive maintenance approach that ensures robots are executing their tasks smoothly 24/7. Not only will this resolve issues quickly as and when they arise, it will also continually improve the solution and how it executes the tasks at hand. Without effective maintenance, a business’s dream of achieving objectives and boosting efficiency and productivity can quickly slip away.
The value of outsourcing
One way businesses can solve the challenge of downtime is by outsourcing the maintenance of their automated operations to third-party specialists. This particularly applies when companies are looking to scale their automation strategy further across the business. For instance, if an organisation is looking to grow its automation projects year on year, this can require an internal shift from one to five full-time roles. Businesses often don’t have the resources to dedicate their workforce to full-time automation maintenance roles, and other challenges such as training, recruitment, 24/7 rotation, sick leave and annual leave also come into play.
Unfortunately, the demand often leads to a situation where maintaining business-critical processes is considered too difficult or too expensive to arrange. As such, automation is limited to less-critical operations. The inability to dedicate the required resources to maintenance may also mean that processes don’t run in an optimal way or that RPA licenses are underused.
Outsourcing the maintenance of automated solutions can mitigate the risk of continuity failure, allowing the internal workforce to focus on their day-to-day roles. When maintenance is outsourced, the cost of the service normally reflects the complexity of the process and the necessary resolution time.
Third-party maintenance also overcomes the challenge of RPA developers getting caught up in maintenance tasks. Typically, developers don’t like maintaining things; they like making new things. If talented developers in a hot employment market are spending significant time on maintenance, organisations risk losing them to more interesting roles. An outsourced maintenance strategy means the responsibility is lifted from the development team, ultimately de-risking the potential loss of valuable internal knowledge.
As maintenance requirements grow, outsourcing means organisations will not have to invest in skilled individuals who are available around the clock. Even if organisations only have small volumes of automation, they can optimise process uptime by running updates and reports at night, with maintenance specialists readily available to respond to any arising issues at any given time.
The capabilities of RPA are endless. If digital workers are trained to take on some human responsibility, businesses can run their operations more efficiently and effectively than ever before. However, a virtual workforce only creates value when the robots are working as intended and new automations can be deployed quickly. Organisations must test their existing automation solution, consider whether automated processes will be able to run 24/7 and if automated recovery systems are in place. Only then can companies prevent productivity disruption, ensure business continuity and know that automation will soften the blow of exceptional circumstances.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 3.
Millions of NHS patients with muscle and joint pain can now be triaged through a physiotherapy app using conversational artificial intelligence, helping to reduce pressure on the NHS amid Covid-19.
Announced today, the partnership between Healthshare and high-growth technology company EQL will enable an additional one million people across Westminster, Oxfordshire and Hull to access Phio. An interactive web-based application, Phio combines pioneering technology with best-in-class clinical oversight, to make the musculoskeletal referral process more efficient for both patients and clinicians.
Before Phio, the typical patient with musculoskeletal pain — for example, back pain — would first see a GP, who could refer them to a physiotherapy department, which could then arrange an appointment for the patient to attend a face-to-face assessment with a musculoskeletal physiotherapist. Even before the coronavirus pandemic, someone with musculoskeletal pain could wait up to four months for an assessment, at which exercises are prescribed (NHS England and NHS Improvement, 2019). Making this process more efficient is essential, as early access to physiotherapy and advice expedites recovery, improving the patient’s short-term and long-term health outcomes (Addley, 2010; Boorman, 2009).
Phio delivers digital-first physiotherapy triage for the NHS, covering all types of musculoskeletal pain. This removes the need for a GP appointment, capturing key information so that the patient’s first face-to-face appointment, if clinically appropriate, is with the musculoskeletal physiotherapist. Phio’s conversational artificial intelligence guides patients through an initial assessment, helping clinicians to separate patients into groups of those who need urgent help, those who can self-manage and those who would benefit from a face-to-face appointment. Healthshare, a community-based healthcare provider led by ex-NHS clinicians and working exclusively with the NHS, has responsibility for managing triage of NHS musculoskeletal patients across Westminster, Oxfordshire and Hull - the partnership with EQL transforms this into a digital-first experience, which is particularly pertinent during lockdown.
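Phio’s assessment itself is proprietary conversational AI under clinical oversight, but a toy, rule-based stand-in can show the shape of the three-way routing described above; the symptoms, thresholds and rules in this Python sketch are invented purely for illustration and have no clinical validity.

```python
# Toy stand-in for the three-way triage described above. The red flags,
# thresholds and rules are invented for illustration only and have no
# clinical validity; Phio's real assessment is conversational AI with
# clinical oversight.

RED_FLAGS = {"numbness", "loss of bladder control", "unexplained weight loss"}

def triage(symptoms, pain_score, duration_weeks):
    """Route a patient into one of the three groups described above."""
    if symptoms & RED_FLAGS:          # any red-flag symptom -> urgent
        return "urgent help"
    if pain_score <= 3 and duration_weeks < 6:
        return "self-manage with prescribed exercises"
    return "face-to-face physiotherapy assessment"

print(triage({"stiffness"}, pain_score=2, duration_weeks=3))
print(triage({"numbness"}, pain_score=5, duration_weeks=1))
```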
To contain the spread of coronavirus, it was announced that all non-essential face-to-face outpatient appointments would be postponed across the NHS from 20th March 2020, effective for at least three months. During this time, Phio will play an important role in reducing GP appointments related to seeking referral to physiotherapy services. This takes considerable strain off primary care, as 30% of all GP appointments in England relate to musculoskeletal pain (NHS England, 2018).
EQL’s partnership with Healthshare builds on similar partnerships, which will enable a total of 8.4 million patients to access Phio — equating to 12% of the UK population. The team is developing new partnerships, with the ambition that everyone in the UK — and beyond — will have access to Phio. EQL’s partnerships will transform physiotherapy during and after lockdown, with the contracts signed for a period of years.
Jason Ward, Chief Executive Officer and Co-Founder, EQL says:
“We want to support patient pathways at a time of extreme pressure on the NHS. We believe that digital technology holds the key to supporting patients within their own homes. Since lockdown started, we’ve had a 96% completion rate of Phio triages and 100% of people who used the system said they would recommend it to family and friends. We’re proud that our partnership with Healthshare will enable an additional million people to access this support.”
Peter Grinbergs, Chief Medical Officer and Co-Founder, EQL adds:
“As a physiotherapist myself, I know first-hand how debilitating musculoskeletal pain can be for patients. As well as the pain itself, individuals sometimes find that their mental health suffers in the face of longstanding discomfort. We created Phio with the vision of providing best-in-class, at-home triage. Our partnership with Healthshare is an exciting development in our quest for digital-first support — empowering patients to take control of their muscle and joint conditions, whilst alleviating pressure on the NHS at this very busy time.”
Jurys Inn Hotel Group turned to Content Guru to open a new Philippines-based customer contact centre, ensuring 24/7 voice channel availability for hotel customers in 36 locations around the world.
Content Guru is working with Jurys Inn, the leading hotel group operating across the UK, Ireland and Czech Republic, to service its global customer base from a new contact centre in Cebu, Philippines. Deployed within a week, Content Guru’s stormⓇ platform gave Jurys Inn the flexibility, scalability and agility to maintain an optimised service for all customers during a time of significant uncertainty, delivered by agents in a new Philippines-based contact centre now able to work remotely.
With 38 hotels, approximately 7,500 rooms and 4,000 employees, Jurys Inn is committed to ensuring all its customers have 24/7 access to voice contact channels with a consistently excellent experience on every call. This has been of paramount importance in the decade Jurys Inn has been working with Content Guru. However, with call volume increases of 300-400% due to travel uncertainty around the COVID-19 pandemic, maintaining this high level of support for customers seeking to change, amend and cancel upcoming business, leisure and corporate reservations has been vital. By deploying a scalable and agile omnichannel contact centre environment with Content Guru, Jurys Inn has maintained complete operation of its contact centre across all channels and significantly minimised call duration for customers contacting the hotel group directly.
Speed of deployment for the new contact centre was crucial, but critically, Content Guru’s storm platform also enables the 45 employees of this brand new contact centre in Cebu to work entirely remotely. Utilising Web Real-Time Communication (WebRTC), the platform means all contact centre agents can access the same browser-based interface they would in a physical office, with effective security and the ability for supervisors to monitor and support agents in the same way while they work from home. Running on low bandwidth, the solution keeps agents fully operational despite the typically poor internet connectivity in Cebu.
“Working with Content Guru for our new contact centre in Cebu was absolutely painless,” said Adrian Bingham, Head of Customer Contact and Data Integration at Jurys Inn. “I’ve worked on a range of offshore contact centre projects during my career and I have never known a transition of this scope to happen so quickly or smoothly.”
“Despite the growing pressure and uncertainty in these unprecedented times, within a week of starting the new deployment - if that - Content Guru had our phones ringing in the Philippines. The fact that we have not had any issues since - particularly with the poor internet in the area - is testament to the robust nature of the platform,” continued Bingham.
“This comes at a crucial time for Jurys Inn and our customers. The foundations we have laid now will prove pivotal in the coming months, both as we continue to support our customer base through this pandemic and by ensuring we are one step ahead in preparing for operations to return to normality as travel restrictions are gradually lifted across the globe,” Bingham concludes.
“Our award-winning Contact Centre-as-a-Service (CCaaS) solution means Jurys Inn can deliver the same high-quality customer experience the group is known for, even in a time of vastly increased demand,” said Martin Taylor, Deputy CEO and Co-Founder at Content Guru. “Jurys Inn agents based around the world can take calls whether they are on-site or at home, without compromising the experience for customers calling in to the contact centre,” Taylor concludes.
Public sector clients including local councils and NHS trusts have rapidly transitioned to remote working using IMImobile’s products.
IMImobile PLC’s cloud contact center software has enabled remote working for its clients following the COVID-19 outbreak. Customers including Hertfordshire Partnership University NHS Foundation Trust, Dudley Metropolitan Borough Council and Bouygues Energies & Services have recently transitioned their office-based contact centers to homeworking due to the lockdown measures introduced by the UK government.
IMImobile strengthened its contact center offering last year through the acquisition of UK-based contact center software provider, Rostrvm. The integrated omnichannel solution consolidates voice, messaging and social customer service channels into a unified agent console and simplifies contact center operations for businesses and public sector organizations. With the product and associated infrastructure hosted in the cloud and delivered as a service, contact centers have easily transitioned to working from home without any service interruption.
Joanne Osborne, Operational Team Leader at Hertfordshire Partnership University NHS Foundation Trust, said: “It has been fantastic that we have been able to transition all 50 of our agents to remote working in a really short space of time. As the mental health service provider for Hertfordshire, it is critical that the lines of communication stayed open without interruption, so we didn’t miss any calls from patients. This crisis has demonstrated that, with the right software, teams are now able to all work from home with ease and this will change the way in which we work in the future.”
Yvonne Steele, Team Manager of Income, Rent to Buy & Leasehold Services at Dudley Council, commented: “We have seen an increase in the number of calls to the department due to COVID-19, with concerns about meeting rental payments from those who have lost their jobs or been put on furlough. IMImobile’s contact center software has enabled a seamless transition to remote working overnight and has removed the need for fixed, office-based agents. Before its implementation, the team had no idea how many calls were coming in or the peak times for those calls; now we have greater reporting and real-time monitoring capabilities, and even the ability to automate outbound interactions via calls or messages to free up time for the team.”
The solution has been instrumental for clients such as Bouygues Energies & Services, which provides facilities management services for a number of large public sector organisations. Scott Hulse, Security Manager – National Operations Centre at Bouygues Energies & Services, said: “The agile nature of IMImobile’s software has enabled all of our agents to work from home without any issues. We believe that this crisis will accelerate business adoption of flexible working practices and digital communication solutions, because not having the right software in place hampers businesses’ ability to react at speed during an emergency situation.”
Commenting on the developments, Sudarshan Dharmapuri, EVP Products at IMImobile, added: “We’re pleased to see our products enabling the levels of agility that organisations need to operate in today’s environment, characterized by unprecedented disruptions. Our cloud contact center product allows customer interactions, automation flows and business processes to be configured, as opposed to programmed, enabling the flexibility to transform operations overnight.”
By signing up, influencers can use their content and production expertise to get vital authentic messaging quickly and clearly to their millions of engaged followers - fighting the COVID-19 infodemic.
Billion Dollar Boy, the creative agency for the influencer age, and the International Federation of Red Cross and Red Crescent Societies (IFRC) are creating the world’s first global influencer network to tackle the COVID-19 infodemic.
Edward East, CEO and Founder at Billion Dollar Boy, said: “Social media should be a very effective tool for reaching a global populace. But with misinformation and fake news so prevalent, that message has to be unified and delivered from trusted sources. Our IFRC network of influencers ticks both these boxes. Instead of being continually vilified, influencers can now put their vast skill-sets to work delivering potentially lifesaving approved messaging to millions of people when it is needed the most.”
The network will launch with more than 30 influencers from across four continents with a combined reach of more than 2 million followers, with more expected to join every day.
Influencers signed up already include Italy’s Antonio Nunziata (230,000+ followers), the UK’s Katie Woods, (190,000) and UAE’s Neda Ghenai (116,000). Influencers looking to get involved can sign up here: https://www.billiondollarboy.com/ifrc-call-out-to-influencers/.
Every week the IFRC will send the influencer network an approved message that they want to disseminate. The influencer will then take that messaging and create their own content. This will then be vetted by Billion Dollar Boy and officially approved by the IFRC for distribution.
Nichola Jones, IFRC’s Cross Media Manager, said: “Getting the right information out there when an emergency strikes is as important as healthcare. Making sure people have access to facts and trusted sources in a situation like this saves lives.”
“Influencers have a crucial role to play in tackling this infodemic and cutting through the noise. They have a level of access to younger people that public authorities or charities don’t have and their relationship with their followers is different. By working together, we can make sure credible content reaches a broader audience and has a positive impact.”
Influencers, who are often unfairly portrayed negatively in the media, can now quickly and easily put their expertise, skills and millions of followers to positive effect during an unprecedented crisis. And because their followers engage with them and trust their content, they are perfectly placed to combat the spread of misinformation and show solidarity with approved IFRC messaging.
Since lockdown began in countries across the world, social media use has skyrocketed, making it an ideal channel for reaching people. Data from global research company Nielsen shows 33% of people are spending more time on social media during lockdown, while Facebook’s own data shows 70% more time has been spent across its apps since the crisis began, with Instagram Live and Facebook Live views increasing by 50% in March.
Back in 2003 the Harvard Business Review published an article, “The Quest for Resilience”. It stated that “The world is becoming turbulent faster than organisations are becoming resilient.” Fast forward some 17 years and, while many companies have improved their ability to respond to the ebbs and flows of business, it’s fair to say that no one could have anticipated our current predicament.
By Phil Rose, Co-founder Ignium.
The point the article made back then still resonates today. “In the past, executives had the luxury of assuming that business models were more or less immortal. Companies always had to work to get better, of course, but they seldom had to get different—not at their core, not in their essence”. Our ability to respond to the massive change around us today, and to re-imagine the very model on which our businesses have been built, will be the biggest test and indicator of our future successes, and indeed, our survival.
Business resilience is about being able to anticipate and respond to changes that impair the ability of the company to both earn money and deliver on their purpose – the real reason they exist – and, importantly, to bounce back when change happens. It’s critical that organisations set themselves up to withstand shocks and deal with uncertainty, remaining agile to change before change becomes necessary. Unfortunately, no one could have ever imagined the shockwave that is being felt across the globe right now. It’s therefore even more important that business leaders consider how they can best support and respond to each other, their employees, as well as the communities and countries in which they operate.
There are both operational and financial dimensions to meeting this challenge. The most crucial, and often over-looked part, is the human side of resilience. For it is with and through people that all businesses exist and thrive. As the magazine Psychology Today says:
“Resilience is that ineffable quality that allows some people to be knocked down by life and come back stronger than ever. Rather than letting failure overcome them and drain their resolve, they find a way to rise from the ashes.”
The need and desire to ‘rise from the ashes’ will resonate with many of us at this time. Businesses are already re-inventing themselves to survive in this new normal and people are learning about what makes them truly happy.
In business, as in life, we all need some form of feedback mechanism. We need to know that we’re heading in the right direction and that we’re on track. That’s a key measure of resilience. The decisions you make today will inevitably have an impact on the results you achieve tomorrow, so it’s important to know where you are relative to your plan - if you have one! Measuring both personal and organisational resilience enables business leaders to establish how well their company is set up to manage change and deal with adversity.
We know what resilience means, but how do you become resilient? There are some key steps and, while it’s better to create a plan for change before events happen, many won’t have had that opportunity. We’ve put together these top ten tips to help leaders, business owners and employees develop these qualities, navigate the current turbulence and disruption in our world, and arrive stronger on the other side.
1. Adaptability and speed of decisions - To adapt, businesses need to make fast decisions, sometimes based on limited information. Be vigilant and alert to changing conditions. While change is inevitable, it’s the response to that change that matters most
2. Accountability - accept that mistakes can and will happen. Being accountable is crucial and enables people to truly step up and stand behind the decisions they make. It’s also important to note that the information we had yesterday is now out of date, so recognise that people are human. Retribution will inhibit their decision-making ability and their willingness to step up and be accountable
3. Flexibility - leaders need to be flexible and open to new ideas that may only be ‘80% perfect’. Just as the wind and storms change a skipper’s tactics so too will leaders need to change theirs
4. Optimism grounded in realism - avoid the negative media frenzy and stay in touch with your own feelings for positivity and optimism. That’s what employees expect. We may not have all the answers but keeping a sense of optimism grounded in the reality of the situation is key to coming through the turmoil in the best possible shape
5. Innovate and act on that innovation - people need to be given the time, tools and permission to think differently. Innovation can’t be left to one person or one department; now is the time to make innovation part of everyone’s job description
6. Get it done culture - resilience is about making things happen once a decision has been made. Quick decisions need to be implemented to be effective, and that’s where the mindset and attitude of the leadership team, and of people throughout the business, really matter
7. Trust your team - it’s imperative that leaders seek the perspective of others, trust their team and delegate where appropriate. Just because someone brings a different perspective to your own doesn’t mean either of you is wholly right, or wholly wrong
8. Be self-aware - Leadership is about two things: leading from the front and generating ‘followership’. Having an awareness of how you ‘are’, how you feel and how you react is key. Self-awareness leads to a greater authentic leadership style and at times like these true authenticity is needed
9. It’s all about your people - people are the biggest asset you have. Recognise and acknowledge that the people in your business are ‘emotional’ humans and therefore need support as well as guidance. Be mindful of each other’s feelings while presenting a positive mindset. Lead with humanity and purpose. Sometimes people just need an arm around their shoulder for support, or a shoulder to cry on
10. Give people meaning - great leaders recognise that people come to work to earn money and bring good to the world. It’s your job, as a leader, to help your organisation and people focus on the ‘why’ of the business and therefore better engage your people now and in the future.
In the digital era, it's now easier than ever to support a remote workforce as so many applications are moving to the cloud. Applications such as Office 365 are ensuring that employees can work from anywhere, which is especially important in the current environment, but this also raises issues for IT managers.
By Kathie Lyons, EVP & GM of ParkView at Park Place Technologies.
IT teams are now having to cope with an increasing number of remote devices and services working beyond their own local network. This means monitoring and managing firewalls and VPNs while ensuring minimal network disruption. Moving applications and services to the cloud also allows businesses to cut down on hardware costs. Automatic software updates and the ability to deal with ever-growing or fluctuating bandwidth demands are also plus points when working with applications beyond the local network. The combination of these benefits enables businesses to react more quickly to evolving market conditions. And with a third of enterprise workloads now running in the cloud and just 21% hosted locally, it appears that migration to the cloud is set to continue.
While moving applications to the cloud has lots of advantages for businesses, it also presents some new challenges. The key challenge with moving applications to the cloud is that this puts them well beyond the scope of most businesses' existing network monitoring and troubleshooting capabilities. IT managers need to be able to monitor application and service performance beyond the edge of their own network. This is becoming increasingly complex as new users, services and technologies are being added all the time. Managing the entire network properly is a major challenge for businesses in the increasingly cloud-centric world.
The ability to view the network whether it’s on-prem or in the cloud is an absolute must for businesses in the digital era because you can't manage what you can't see. An all-in-one management platform is the key to complete real-time visibility, enabling IT managers to monitor everything from a 'single pane of glass' so that they can pinpoint and tackle any issues that arise straight away.
Without the ability to monitor applications and services beyond their own firewall, IT managers will be unable to tell where traffic is routing or where any potential issues might be. If they don't have complete visibility, they cannot see whether traffic is using the correct paths, and so they are unable to guarantee that it is using the most secure route possible. This could result in security problems as well as poor performance of applications and services for end-users, ultimately adversely impacting the business.
As well as having a central point of control, organisations also require a flexible management platform that can be configured and scaled up to fit their specific business needs. Support for virtual platforms is also necessary as businesses continue to move through their digital transformation journey, along with the ability to support an unlimited number of users.
It's essential to use a network management platform that provides an up-to-date view of all network assets. This enables IT engineers to quickly identify network issues so that they can address them as soon as possible. Management platforms should be flexible so that they can cater for a diverse range of industry sectors including financial, legal, telecoms, media, retail and public sector.
When managing the network, the ability to discover, trace and visualise application data paths offers businesses a major advantage. This enables IT managers to immediately spot problems with cloud-based applications or services, providing them with immediate locational and geographical context which they would not get from a simple table of data. By visualising data paths, network managers can easily spot bottlenecks, paths that have deviated from expected routes, or paths that haven't reached their destinations. As well as identifying the exact issue, this also tells IT engineers whether the problem has occurred within or outside of their local network.
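As a minimal illustration of path discovery, the sketch below shells out to the standard traceroute tool to list the hops an application's traffic takes; it assumes a Unix-like host with traceroute installed, and the hostname is a placeholder.

```python
# List the network hops to a destination using the system traceroute tool,
# as a crude form of the path discovery described above. Assumes a
# Unix-like host with traceroute installed; the hostname is a placeholder.
import subprocess

def trace_path(host):
    """Return the hop lines reported by traceroute for a destination."""
    result = subprocess.run(["traceroute", "-n", host],
                            capture_output=True, text=True, timeout=60)
    return result.stdout.splitlines()[1:]   # skip the header line

for hop in trace_path("app.example.com"):
    print(hop)
```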
Even though applications now reside in the cloud, it is still very important to be able to see and track latency issues for each application. High latency hurts end-user productivity through poor application performance. Consequently, it is vital for IT managers to be alerted to any latency change so they can adjust before performance degrades.
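A minimal sketch of such latency alerting, assuming HTTP-reachable application endpoints; the URLs and threshold are placeholders, and a real monitoring platform would also keep history and visualise trends.

```python
# Poll each application's health endpoint and alert when response latency
# crosses a threshold. URLs and the threshold are placeholders.
import time
import requests

APPS = {
    "crm":  "https://crm.example.com/health",
    "mail": "https://mail.example.com/health",
}
THRESHOLD_MS = 250

def check_latency():
    for name, url in APPS.items():
        start = time.monotonic()
        try:
            requests.get(url, timeout=5)
            latency_ms = (time.monotonic() - start) * 1000
            if latency_ms > THRESHOLD_MS:
                print(f"ALERT: {name} latency {latency_ms:.0f} ms")
        except requests.RequestException as exc:
            print(f"ALERT: {name} unreachable ({exc})")

check_latency()
```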
Applications moving to the cloud is a trend that's set to stay, so it's vital that businesses have a way of ensuring visibility, both within their own network and beyond. Bridging the gap between application management and network management should be a priority in any IT strategy. Ensuring visibility over application network paths means that businesses can work smarter and faster, with minimal disruption, no matter where their employees are.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 4.
Forward-looking businesses are offloading data centre maintenance and management to the cloud to keep things running during the COVID-19 pandemic.
By Raja Renganathan, Vice-President at Cognizant and head of Cloud Services Business.
For years, proponents have urged businesses to better enable employees to work from home, citing benefits like increased productivity, less commute time, better work-life balance and enhanced preparedness for business continuity, should a localised disaster strike, such as a tornado, hurricane, earthquake or flood.
Overnight, the COVID-19 global pandemic made the final argument for work-from-home a reality for millions of workers – ready or not. Many global enterprises must suddenly support more and more people working remotely, whether they are equipped to deliver and support workloads at scale or not. This has sent businesses scrambling to quickly enhance digital channels and platforms, increase bandwidth, add virtual private networks (VPNs), provision more laptops, and offer thin-client applications to their employees and customers to improve operational collaboration and enforce social distancing.
A proper business continuity plan followed up with precision execution can ensure that enterprises deliver such capabilities. However, what happens to the on-premises data centre where a physical presence is required? Even with workplace virtualisation technologies like remote consoles and “out-of-band networks”, which reduce the need for on-site data centre operations staff, the fact is, physical boxes in on-premises data centres still need to be managed, guarded and secured by people.
Take the February 2019 data centre meltdown of a major U.S. bank, which crippled the organisation’s online and mobile banking capabilities. The company needed to shut down one of its data centre facilities due to a smoke condition. It took two days to bring the facility back up, and only with significant effort, which required the physical presence of data centre staff.
Imagine if this happened during the COVID-19 crisis. The time taken to fix the issue would increase dramatically due to a lack of available staff and hesitancy to collaborate in person. Even physical security could become compromised, which raises grave concerns.
An increasing dependence on the foundations of IT
The fact is, as our dependence on IT intensifies, data centres have become the substratum of how we live, work and play. From banking to insurance to 24×7 news, everything is supported by cloud infrastructure housed in virtual data centres. If these data centres go down, critical business functions, financial networks and in some cases our whole way of life become threatened. As a result, virtual data centres need to be continuously supervised and constantly cared for.
The Uptime Institute’s 2019 Data Centre Survey puts this into context.
Why transfer maintenance responsibility to the cloud?
Forward-looking businesses are taking a different approach – they are offloading data centre maintenance and management to the cloud. One reason for this is scale, as cloud service providers (CSPs) have mastered the art of managing scale. In addition to proactively planning for capacity, businesses can leverage auto-scaling features to rapidly meet any unplanned surge in demand.
Cloud infrastructure is also highly automated and allows for the creation of scaling policies that set targets and add or remove capacity in real-time as demand changes. Thus, utilisation and costs are optimised, while the need for having more people on the ground is reduced.
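To make that concrete, here is a minimal sketch of the target-tracking logic behind such scaling policies. The target utilisation, bounds and function names are illustrative assumptions, not any particular provider’s API:

```python
# Illustrative sketch of a target-tracking auto-scaling policy.
# Thresholds and the metric source are hypothetical stand-ins for a
# real CSP's monitoring and auto-scaling endpoints.

TARGET_UTILISATION = 0.60   # aim to keep average CPU near 60%
MIN_INSTANCES, MAX_INSTANCES = 2, 20

def desired_capacity(current_instances: int, avg_cpu: float) -> int:
    """Scale the fleet in proportion to observed demand."""
    if avg_cpu == 0:
        return MIN_INSTANCES
    # Capacity needed so that utilisation settles near the target.
    needed = round(current_instances * (avg_cpu / TARGET_UTILISATION))
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

# Example: 4 instances running at 90% average CPU.
print(desired_capacity(4, 0.90))  # -> 6: add two instances
```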
Most CSPs now provide a multi-tenant architecture that allows different business units within an organisation, or multiple organisations, to share computing resources. This allows organisations to optimise their resources and staff compared with running their own data centres.
Lastly, the physical security in and around CSP data centres tends to be more robust and proven than what enterprises can individually afford. Most CSPs have rigorous and ongoing processes for assessment and mitigation of potential vulnerabilities, often performed by third-party auditors.
Planning for disruption
Cloud migration is an involved process, but with many proven methods and tools available, it is not a difficult one. For businesses that take a meticulous, optimisation-led approach, the cloud can be more cost-effective than CapEx-hungry data centres. Even in the context of coronavirus, digital platforms running in the cloud can unlock cost and operational advantages via centralised control while meeting the bandwidth challenges that flare up during peak usage periods.
Perhaps it is our hyper-connected world, but severe disasters seem more frequent than ever. As businesses respond to the many challenges COVID-19 presents, they also need to keep their eye on the horizon and prepare their data centres to withstand any disaster that strikes in the future.
The IT world is fond of buzzwords and talk of the ‘next big thing’. A few years ago, it was the cloud and SaaS, then machine learning and automation, and now you are probably hearing discussion of AIOps and the potential it holds. But what is AIOps? Will it really transform IT and if so, how?
By David Cumberworth, Managing Director, EMEA and APAC, Virtana.
AIOps is artificial intelligence for IT operations and yes, it stands to transform IT and operations massively. AIOps lets infrastructure owners use a vast array of real-time data, algorithmic insight and machine learning to optimise private and public clouds and to automate the way companies migrate applications and workloads to the cloud or to next-generation platforms. In this article, I will explore how that can be achieved, and how AIOps can be seamlessly integrated into all forms and combinations of data centre(s).
AIOps, the cloud and the data centre
The past few years have seen mass migration to the cloud, and it’s now commonplace for an organisation to use cloud storage and cloud-based applications daily. Much of this data is held and managed in the public cloud (Azure, GCP, AWS, etc.) and by PaaS and SaaS providers. The public cloud is typically coupled with on-premises infrastructure managed by in-house IT teams, third-party providers (system integrators, MSPs, etc.) and colocation hosting providers. Hybrid clouds, which make up this mix of on-premises and cloud arrangements, are increasingly the way enterprise organisations consume IT.
The benefits of moving to the cloud are well understood: business agility, massive scale, ultimate flexibility, and freedom from the huge capex and opex of running your own data centres.
The widespread adoption of the cloud, however, and the resulting hybrid cloud environments, have generated a highly complex IT landscape that now requires infrastructure owners to integrate, monitor and maintain multiple applications, infrastructures and locations simultaneously. Machine learning and AI algorithms now permit the automation of many routine tasks, but the management of this technology must also be incorporated into the organisation’s IT domain, and this challenge will grow as the use of AI expands.
As the options for hybrid/public/on-premises data centres have proliferated, organisations have moved between them according to specific business needs. This means that cloud migration is not a one-time event. We read about significant reassessments of cloud consumption due to high costs, performance issues or, simply, the migration of a very complex infrastructure to cloud/colo being too much of a heavy lift. As cost models improve, as technologies such as containers make migration easier, and as the agility the cloud provides increases, we will see continued migration to and from the cloud for the foreseeable future.
For IT teams, multiple clouds and migrations result in a plethora of management paradigms that are extremely difficult to manage, let alone optimise. Diverse hosts and applications are managed through a wide range of tools and accessed through different dashboards. These lack natural synergies, and they are not context-aware. For example, if there is an outage in one silo and an application performance issue in the cloud, the fact that the tools cannot speak to each other means a lot of manual intervention to root-cause problems. That is a pity, because if those synergies could be realised, providing an overview of the entire IT infrastructure and its functioning from a single viewpoint, the business impact would be huge: reduced outages, increased productivity and a resulting positive revenue impact.
Well, now such synergies can be realised – through AIOps – and the potential is indeed vast.
AIOps brings it all together
The evolution of IT, particularly in the last few years, has resembled that of the motor car. In its early days, the car was managed, driven and maintained by the owner, with input from a third-party specialist. Today, the car is driven by its owner, and still has attention from the specialist, but much of the process is augmented by an on-board computer that keeps it running and diagnoses faults.
AIOps gives infrastructure owners capabilities comparable with those of specialists working on modern cars who rely on tooling and diagnostics to highlight the problem; the machine augments the technician providing insight across all tiers of infrastructure, regardless of location or data centre type, via a single interface.
Using AIOps this way provides two key benefits. First, the organisation can see all applications and functionality in real time and in context, and pre-empt the outages and issues that the AIOps algorithms detect. Second, it can use AIOps’ real-time analytics to optimise choices around operations and infrastructure – including which applications and workloads should be moved to the cloud – based on algorithmic insight and reliable predictions of future consumption.
The beauty of AIOps lies in its ability to cut through the ‘noise’ generated by the many moving parts of modern IT infrastructure, and show clearly what is working and what is not (or may not, in the future). This gives IT teams the power to predict and avoid outages based on historical data, to expedite and guarantee successful cloud migrations and to make real time decisions around workload and application placement. This, in turn, lets the organisation get the best from cloud capability, maximise data centre value for money and optimise infrastructure resource/capital spend.
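As an illustration of the statistical core of that idea, the sketch below flags metric samples that deviate sharply from recent history – the kind of early-warning signal used to act before an outage. Real AIOps platforms use far richer models; the window, threshold and latency figures here are illustrative assumptions:

```python
# Minimal anomaly detector: flag samples that sit far outside the
# recent baseline (rolling z-score). Illustrative only.
from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent samples
        self.threshold = threshold           # z-score cut-off

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous vs. recent history."""
        anomalous = False
        if len(self.history) >= 10:  # need a baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

detector = AnomalyDetector()
for latency_ms in [12, 13, 11, 12, 14, 12, 13, 11, 12, 13, 95]:
    if detector.observe(latency_ms):
        print(f"alert: latency {latency_ms}ms deviates from baseline")
```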
Understandably, some are reluctant to throw an entire business behind this new concept straight away – although the signs suggest that AIOps, rather like driverless cars, will in time become the new normal. But a gradual introduction is in any case perfectly feasible — this capability can be applied across the board or used with a few initial applications and then scaled up in ‘baby steps’, according to business objectives.
In short, AIOps is no mere buzzword or ‘next big thing’, but a transformative step in the evolution of IT. And since it will make IT managers’ jobs easier, more efficient and – hopefully – more appreciated, it is surely a step to be welcomed.
The number of regulatory standards and security best practices infrastructure teams have to comply with, and the associated penalties for not doing so, are no laughing matter.
By Jonny Stewart, Principal Product Manager, Puppet.
Auditors expect I&O teams to implement and abide by operational, security and regulatory policies 24/7, and the risks of failing to do so can be severe and costly. Rarely do you hear that I&O teams are getting more budget, and their job is not getting simpler as new technologies arrive. In fact, infrastructure is getting far more complex and harder to manage manually – which is how a lot of companies still do it.
So, how do companies stay on top of these increasing pressures? By following DevOps principles of cross-team collaboration and implementing both automated compliance assessment and vulnerability remediation, companies of all sizes will find that many of these burdens are lessened.
Compliance in an age of regulation
As organisations scale, their IT infrastructure inevitably becomes more complex: huge numbers of servers, firewalls, routers and switches need to be managed, and hundreds of devices, often from different technology vendors, need to be configured and maintained – frequently by hand. A vastly increased IT footprint leaves more space for vulnerabilities to take hold, and compliance becomes ever more time-consuming and, in turn, ever more costly for enterprises of all sizes. Manual record-keeping and endless spreadsheets to stay on top of what was patched when, and which passwords need updating next week, are simply unmanageable.
Since the creation of the European Union’s General Data Protection Regulation (GDPR), governments across the world have queued up to implement their own data protection laws. This can pose a particular problem for large businesses that operate across national borders. Legal definitions imposed by GDPR, for example, differ from those imposed by the California Consumer Privacy Act (CCPA), and businesses that deal with data across these two jurisdictions must comply with both – never mind their internal compliance with benchmarks such as CIS. Failure to do so puts enterprises at risk of huge and sometimes catastrophic fines: up to 20 million euros or 4% of annual global turnover, whichever is higher, in the case of GDPR breaches. The pressure of these regulations is only heightened by the abundance of industry standards and best practices that are a constant feature of IT and compliance work. The only way to cope is to ensure that compliance is consistent and continuous – proactive, rather than reactive.
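To put that ceiling in concrete terms, exposure scales with turnover because the fine is the higher of the two figures; a two-line illustration with a hypothetical turnover:

```python
# Illustrative only: the GDPR maximum is the higher of EUR 20m or 4%
# of annual global turnover. The turnover figure below is hypothetical.
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

print(max_gdpr_fine(2_000_000_000))  # 80000000.0 - EUR 80m exposure
```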
Changing company culture
While tools and technology can do a lot to ensure continuous and repeatable remediation of vulnerabilities, they alone will not solve all the problems. One sure-fire way to tackle many compliance-related issues within an organisation is to invest time and energy in changing company culture. If an organisation embraces DevOps principles and promotes cross-team collaboration and communication, the resulting synergy can remove many of the hurdles that currently stand in the way of efficient compliance. Implementing technology that can automate much of the usual compliance workload can only happen effectively if siloed teams begin to work towards a common goal.
It is too often the case that security teams and IT teams operate adrift from each other – almost to the point where there seems to be a false belief that some rule or regulation prevents IT Ops and InfoSec teams from collaborating, or even meeting!
But both teams are responsible for compliance, so by not working closely together they make it much harder for themselves. For example, many IT teams do not have the correct access to the APIs that would allow them to remediate vulnerabilities swiftly. Getting such access can take weeks or even months, all the while extending the period in which the company’s infrastructure is at risk of drifting from its desired, up-to-date state.
The knock-on effect of this is, of course, that it also extends the period in which a company could find themselves falling foul of compliance and security standards and regulations as security patching is not done in a timely manner and all necessary records are not being kept. There is no tangible reason that this needs to be the case or that IT teams should not have access to the live data that would allow them to identify and remediate vulnerabilities as quickly as possible and ensure compliance standards are adhered to.
Once these two teams start to collaborate more, they will quickly be able to identify and share their pain points before eliminating any unnecessary steps in their compliance and security processes. Following this, the introduction of compliance as code can begin to automate many of the remaining stages in the process from identification through to remediation of vulnerabilities.
Create a sound process to adhere to standards and ensure repeatability
Manually synchronizing policy enforcement and compliance at scale is not an option. The right processes and technology should be put in place to ensure that people in every corner of the organisation always adhere to security protocols – especially as digital infrastructure needs continuous updating, improving and scaling in the face of an ever-changing regulatory landscape.
For an IT team to manually address even the smallest issues is incredibly time consuming and inefficient – especially if you consider the different processes, tools and internal protocols used by various departments within a large enterprise. The further down the digital transformation journey, the more essential the automation of these processes becomes. The good news is that there is technology out there that makes compliance-related work easy, or at least less burdensome.
The best of these tools enable the IT team to automate configuration management, allowing them to rapidly scale compliance processes. Using these tools, IT teams describe their desired state – defining the configuration in a manifest just once – and it is then automatically applied across the entire infrastructure. Once the compliant state has been defined and is running across the entire IT stack, these tools can continuously monitor, enforce and remediate using automation.
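To illustrate the shape of that check-and-remediate loop, here is a deliberately simplified sketch with an in-memory stand-in for the live host. The paths, settings and drifted value are illustrative assumptions, not any specific tool’s manifest format:

```python
# Compliance-as-code sketch: declare the desired state once, then
# compare the live system against it and remediate drift, keeping an
# audit trail. All values here are hypothetical examples.

DESIRED_STATE = {
    "/etc/ssh/sshd_config": {"PermitRootLogin": "no"},
    "/etc/login.defs": {"PASS_MAX_DAYS": "90"},
}

LIVE_HOST = {  # simulated live configuration
    ("/etc/ssh/sshd_config", "PermitRootLogin"): "yes",  # drifted
    ("/etc/login.defs", "PASS_MAX_DAYS"): "90",          # compliant
}

def enforce(state):
    """Remediate drift and return an audit trail of changes made."""
    changes = []
    for path, settings in state.items():
        for key, desired in settings.items():
            if LIVE_HOST.get((path, key)) != desired:
                LIVE_HOST[(path, key)] = desired
                changes.append(f"{path}: {key} -> {desired}")
    return changes

print(enforce(DESIRED_STATE))  # one setting remediated
print(enforce(DESIRED_STATE))  # [] - second run is idempotent
```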
These basic checks, which are often time consuming and repetitive to do manually, are the kind of tasks that machines excel at. Your I&O team can then be freed up to do what they excel at: seeking out and tackling new, more complex threats to your IT infrastructure. This automated process can be repeated again and again meaning that both I&O and InfoSec teams can rest safe in the knowledge that these tasks are being done.
Automation helps your bottom line
Finally, and let’s not be coy about this, what automation essentially does is save your organisation money. Sometimes vast quantities of it. Digital transformation is all about efficiency and smarter ways of working. A huge part of this is automating simple but time-intensive tasks. The problem with compliance work is that it can be hugely time-intensive, and very expensive should it go wrong.
Implementing new systems configurations across an entire organisation’s infrastructure can take a long time. Having to do this repeatedly in order to comply with new and evolving requirements from regulatory or industry bodies only increases time spent on these tasks. Preparing reports for audits to prove that all of this is being done consistently and in line with best practice is another task costing your IT team valuable hours. And should something go wrong, a data breach, a cyber-attack, or a hefty fine for not being compliant could find your organization in real trouble.
The beauty of automation in an age of compliance and regulation is that it has the potential to solve these issues. It frees your IT teams to focus on the things that really matter. Using compliance as code, tasks that once could distract your team for weeks or months can take hours or minutes. Implementing automation reduces your organisation’s overheads dramatically and focuses IT operations on strategic initiatives. Every moment your IT team isn’t pushing new innovations and driving growth is time wasted – the antithesis of what digital transformation is all about.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 5.
Contactless temperature measurement at main entry points in buildings.
Siemens Smart Infrastructure has launched the Siveillance Thermal Shield. This solution package quickly measures the body temperature of a person seeking to access a building and enables the results to be integrated into the video and access systems of corporations. Thermal imaging cameras are used to measure, in a contactless way, the body temperature at a distance of up to two meters, ensuring the safety of monitoring staff. If the camera screening indicates an elevated body temperature, a second reading must be taken using a medical thermometer to confirm the finding.
This solution package integrates the third-party screening camera with the Siveillance Video security platform and other security systems from Siemens. This allows the measurements to be seamlessly integrated into the workflow of the corporate security solutions. Using Siveillance Thermal Shield at the entrance to a factory building, for example, offers a quick and easy way to screen employees as part of routine access control procedures. This is particularly useful in the food industry where the Covid-19 pandemic has made production more challenging. Other possible use cases include hospitals and border crossings.
“Siveillance Thermal Shield improves the safety of all occupants in buildings or facilities”, said Joachim Langenscheid, Solution and Service Portfolio Head Europe at Siemens Smart Infrastructure. “We also advise companies on how they can use Thermal Shield for their industry-specific applications to optimize their security systems and procedures, and we support them in the technical implementation.”
To ensure the highest level of accuracy, the cameras measure the body temperature near the eyes. A positive result triggers acoustic and visual alarms. The temperature is measured for each person individually to ensure accurate and reliable results. If a person shows an elevated body temperature and this finding is confirmed by a second reading obtained with a medical thermometer, the follow-up steps defined in the workflows are initiated automatically.
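The decision logic described above might be sketched as follows; the threshold value and function names are illustrative assumptions rather than Siemens’ actual implementation:

```python
# Sketch of the screening workflow: an elevated camera reading raises
# alarms and must be confirmed by a medical thermometer before the
# predefined follow-up steps run. Threshold is an assumed example.

FEVER_THRESHOLD_C = 37.5  # assumed cut-off for an elevated reading

def screen(camera_temp_c: float, confirm_with_thermometer) -> str:
    if camera_temp_c < FEVER_THRESHOLD_C:
        return "access granted"
    # Elevated camera reading: alarm, then confirm with second device.
    trigger_alarms()
    if confirm_with_thermometer() >= FEVER_THRESHOLD_C:
        start_followup_workflow()
        return "access denied: elevated temperature confirmed"
    return "access granted: camera reading not confirmed"

def trigger_alarms():
    print("acoustic and visual alarm raised")

def start_followup_workflow():
    print("predefined follow-up steps initiated")

print(screen(38.2, confirm_with_thermometer=lambda: 38.0))
```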
European supply chains, retailers and logistics service providers can download an app giving real-time information on delays being experienced by trucks at European borders resulting from the Coronavirus pandemic.
Leading shipment visibility provider Sixfold, application development specialists FoxCom and consulting firm SpaceTec Partners are participating in the Galileo Green Lane initiative led by the European Global Navigation Satellite Systems Agency (GSA), the provider of the European Navigation System Galileo.
As a result of the COVID-19 pandemic, the European Commission requested Member States to designate TEN-T border-crossing points as ‘Green Lane’ border crossings, with the expectation that these border crossings, including any checks, should not exceed 15 minutes on internal land borders. Galileo Green Lane will support the management of transit across borders, relieving the pressure of handling goods and allowing the quick passage of critical goods such as Personal Protective Equipment (PPE) including COVID-resistant theatre gowns and masks.
Galileo Green Lane aims to provide transparency to border authorities and freight transporters on the border crossing times at Trans-European Transport Network (TEN-T) border points. It leverages the positioning accuracy of the Galileo navigation system to locate incoming vehicles in a defined geo-fenced area surrounding critical borders. Location data generated at the border can also be combined with a geo-tagged photo to provide additional information. The solution relies on European GNSS services and infrastructure and demonstrates the resourcefulness of Galileo in crisis situations.
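The geo-fencing step can be illustrated with a simple point-in-radius test; the coordinates and radius below are hypothetical, not the system’s real fence definitions:

```python
# Sketch of the geo-fencing idea: decide whether a Galileo-derived
# position falls inside a circular zone around a border crossing.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def in_geofence(lat, lon, fence_lat, fence_lon, radius_km=5.0):
    return haversine_km(lat, lon, fence_lat, fence_lon) <= radius_km

# Hypothetical truck position near a hypothetical crossing point.
print(in_geofence(51.050, 3.730, 51.060, 3.740))  # True: count it
```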
Through Galileo Green Lane, Europe's logistics industry gains access to a real-time overview of border traffic hold-ups, built on the foundation of Sixfold's COVID-19 map. As part of its growing role within Europe's supply chains, Sixfold took the initiative in mid-March 2020 to provide supply chains, retailers and shippers with a free live border crossing map which is updated in real-time. The map enables shippers to better understand the expected delays in receiving shipments as a result of the increasing number of border checks due to the COVID-19 crisis. Over 500,000 logistics professionals across Europe have since used the border map to better plan their transport routes to avoid lengthy delays at borders.
Sixfold, a leading European real-time transport visibility platform, is also the exclusive provider of such data for Transporeon, Europe's largest transport network. Integrating its real-time transportation data and advanced visibility platform into the Galileo Green Lane app forms an integral part of the EU's response to outbreaks of the COVID-19 disease.
The Galileo Green Lane mobile app itself was developed by FoxCom, a leading-edge software architecture and development studio focused on analysis, architecture, implementation, integration, deployment and maintenance of database driven software. Based in Prague near the GSA's headquarters, FoxCom took on the challenge of rapidly prototyping and developing an app solution tailored for freight transporters and border officials.
FoxCom and Sixfold were brought together by the specialized consultancy firm SpaceTec Partners who oversaw the coordination and operational management of this dynamic project. "The combination of Galileo Green Lane data and Sixfold's real-time visibility platform is a powerful tool for logistics companies to better understand delays being experienced by trucks at European border crossings," says Rainer Horn, Managing Partner of SpaceTec Partners. "In these troubled times, the app should become a stalwart tool of supply chains."
Wolfgang Wörner, Sixfold's CEO adds: "Sixfold has grown rapidly over the last couple of months and is now the real-time visibility provider-of-choice for shippers, logistics service providers and carriers. Building upon that momentum, we decided at the outset of the COVID-19 crisis to utilize our market-leading visibility platform to help all in Europe's supply-chains to better manage delays in crossing borders. Clearly, we are delighted to collaborate with the GSA and the European Commission to serve even larger audiences."
"The Galileo Green Lane app is an excellent example of how Galileo is enabling young European smart companies to produce innovative apps that tackle global challenges," adds Pascal Claudel, Acting Executive Director, European GNSS Agency.
Digital monitoring, implemented during the COVID-19 pandemic, might become the new normal.
The COVID-19 pandemic has led governments to the implementation of mass surveillance measures. Experts fear that this invasion into people’s digital privacy might not be that easy to roll back.
“We at NordVPN agree that all appropriate measures should be taken to stop the pandemic and save people’s lives. But we are also digital privacy advocates. The new surveillance on people affected by COVID-19 undoubtedly restricts some freedoms and rights. What is more, some countries are using surveillance without an appropriate legal basis. That means no one knows how this data is processed or what will happen with it in the future,” says Daniel Markuson, digital privacy expert at NordVPN.
“The coronavirus pandemic shows that surveillance technology is already here – it’s not something out of sci-fi movies. It’s up to governments to ensure the proper use of these new technologies, as the potential abuse of power could violate human rights,” Markuson adds. “Another worrying thing about these surveillance policies is that they don’t have any clear end date. Therefore, this might be deemed the new normal even after the pandemic ends. Governments must provide a clear exit strategy and define a date when they will cease the monitoring.”
At least 25 countries have implemented digital surveillance over their citizens to combat COVID-19. The methods and scope of monitoring differ in each country. For example, China has forced hundreds of millions to install a “health code” app, which determines whether the user is fit for travel or must stay at home.
In Moscow, citizens will be required to use QR codes for traveling. The Russian government will also employ surveillance cameras and facial recognition technology to ensure people are staying at home.
Europe has followed the Asian example, copying the tracking apps as well as employing drones and collecting telecom data.
India is geolocating people’s selfies as well as releasing addresses of the COVID-19 patients.
Israel has implemented surveillance on a national scale. People with suspected or confirmed Coronavirus cases are tracked by mobile phone.
In South Korea, the government sent out detailed messages with the travel information of coronavirus-infected people. Although the purpose was to flag possible contact points, the texts revealed personal details which, in some cases, were embarrassing – for example, that some people had been involved in affairs or had paid for sex.
talent.io switches to free model to bring tech recruitment back from standstill.
As the impact of COVID-19 continues to force tech companies to hold their spending, new research from talent.io has revealed that 38% of European firms are freezing most or all of their tech recruitment. In London alone, there has been a 57% drop in the number of companies creating new permanent tech job listings.
“Our insights into the state of the recruitment market reveals the harsh reality of the current pandemic,” comments talent.io co-founder Jonathan Azoulay. “What’s concerning is that London, which is one of the most important tech hubs in the world, is one of the hardest hit regions in Europe.”
talent.io’s market research is based on both internal and external analysis of COVID-19’s impact on the tech recruitment industry across Europe. It has monitored the hiring activity of over 5,000 tech startups, unicorns and corporate businesses on its selective recruitment platform and surveyed over 800 of them further. It has also tracked thousands of job board listings across LinkedIn, Glassdoor and Indeed, as well as observing recruitment activity across an extensive number of Google keywords and paid marketing campaigns.
Meanwhile, a separate report from Beauhurst, which analyses investment activity of the UK's fastest-growing companies, says 22% of jobs in high-growth tech companies were under “severe to critical risk” – equating to 615,000 people in the UK who could lose their jobs.
Jonathan continues: “The tech industry must not stand still, which means companies need to invest in the right people now so they can thrive in the future. This is why we are switching to a 100% free model to help companies for the duration of the crisis. We also hope this will, in turn, help tech workers who have sadly lost their jobs find new positions.
“Our vision has always been to offer the simplest solution for matching tech talent with great tech projects. From day one, our approach has been to find the tech industry’s pain points – which is most often scarce cash resources – and use our platform to remove them.”
The 100% free hiring model announced by talent.io will see hiring fees drop from an average of £8,000 per hire to £0 for at least three months. This will give tech companies a much-needed breather and allow them to invest in what matters most in the long run – their teams.
Technology advancements are changing the way employees work and where they work.
By Matt Saunders, Head of DevOps at Adaptavist.
For the benefit of the wider community, employees are, where possible, working from home to combat the threat posed by COVID-19. However, having a distributed workforce requires forethought about how employees can communicate seamlessly with each other, regardless of their location. The particular challenges posed by COVID-19 mean that entirely new groups of employees are now being asked to work from home and are having to adjust to new ways of working.
Using collaboration tools to improve productivity
While tools such as Slack, Trello, Zoom and Office 365 improve worker productivity by providing seamless collaboration, they can also encourage an always-on culture among the workers who use them. And as adoption of these tools increases, care needs to be taken to avoid information overload and burnout.
When implemented properly, these tools break down barriers and open up lines of communication across teams and departments. Collaboration tools give modern workers the ability to share files and hold audio and video conversations, making the virtual office a reality. Teams can meet on one channel, regardless of their physical location, to share and receive information. And by using dedicated channels for specific purposes – discussing project issues, service outages, new product features and so on – conversations stay focused on the topic at hand for faster resolution. In addition, tailoring collaboration tools with integrations and automation capabilities enables teams to remain on track and action insights in near real time.
Slack’s instant messaging capabilities help teams focus their discussions around resolving issues or progressing to the next stage of a project. For example, if your business uses a customer relationship management (CRM) tool like Salesforce, you can automatically spin up a discussion channel when you onboard new customers.
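A minimal sketch of that automation, using Slack’s Web API via the slack_sdk Python package, might look like the following. The onboarding trigger, channel naming scheme and user IDs are assumptions; in practice the call would be wired to a CRM webhook or integration platform:

```python
# Sketch: create a dedicated Slack channel when a customer onboards.
# The trigger and customer fields are hypothetical examples.
from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # bot token kept in secrets store

def on_customer_onboarded(customer_name: str, team_user_ids: list[str]):
    # Slack channel names must be lowercase, without spaces.
    channel_name = "cust-" + customer_name.lower().replace(" ", "-")
    resp = client.conversations_create(name=channel_name)
    channel_id = resp["channel"]["id"]
    client.conversations_invite(channel=channel_id, users=team_user_ids)
    client.chat_postMessage(
        channel=channel_id,
        text=f"Welcome channel for {customer_name} created automatically.",
    )

on_customer_onboarded("Acme Ltd", ["U012AB3CD"])
```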
The future of collaboration tools
In the future, we can expect to see technologies such as artificial intelligence (AI), augmented reality (AR) and machine learning (ML) increase the effectiveness of collaboration tools - making them faster to adopt and more intuitive to use. The capabilities of workplace collaboration tools will evolve, leveraging team insights and bringing people together for more focused, productive, and powerful interactions.
Collaboration tools will become more personalised in the future, inviting a higher level of engagement on these platforms. AI will allow collaboration bots to become more sophisticated, allowing problems to be automatically solved in a conversational style. Visual collaboration tools will also evolve to streamline workflows and make meetings more immersive and productive. As remote work becomes more common, visual collaboration tools will be essential to keep remote workforces engaged.
Measuring collaboration success
Investing in practical training and onboarding practices is crucial to ensure teams get the most value from collaboration tools. Implementing these tools in a thoughtful and customised way will enable employees to focus on the job in hand.
To measure the success of your collaboration tools, you’ll need to establish a mix of quantitative and qualitative objectives. For example, are your teams accomplishing work more efficiently? Take, for instance, teams that are using Jira Service Desk and Slack – has there been a noticeable decrease in the time to resolve a customer query? Are teams consistently meeting service level agreements? Is the tool enhancing team activities, or is it getting in the way and hindering productivity?
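One such quantitative check can be computed directly from ticket data. The sketch below assumes a simple export format (real data might come from a Jira Service Desk export) and reports the median time to resolution:

```python
# Sketch: has median time to resolve a customer query fallen since
# the collaboration tools were rolled out? Data shape is assumed.
from datetime import datetime
from statistics import median

tickets = [
    {"opened": "2020-03-02T09:00", "resolved": "2020-03-02T17:30"},
    {"opened": "2020-03-03T10:15", "resolved": "2020-03-04T09:00"},
    {"opened": "2020-03-05T08:40", "resolved": "2020-03-05T11:05"},
]

def resolution_hours(t):
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(t["resolved"], fmt) - \
            datetime.strptime(t["opened"], fmt)
    return delta.total_seconds() / 3600

print(f"median time to resolution: "
      f"{median(map(resolution_hours, tickets)):.1f}h")
```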
In addition to productivity measures, measuring employee satisfaction is also crucial. Do your employees believe the tools are helping them, or are they contributing to information overload? Do your remote workers feel like they are fully part of your team? Are the tools helping them reach their development or professional goals? Can teams access everything they need from within the tool, or are there extra steps slowing things down?
With the nature of work changing even more and becoming more flexible, we will continue to see collaboration tools playing a critical role in helping remote teams connect with others, access information in real time and communicate with ease, regardless of time zone or physical location.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 6.
The public and private sectors must act now to enable access to critical services, tackle social exclusion, and enable career mobility for the offline population.
There is an urgent need to tackle the sharp digital divide between the world’s online and offline populations, according to the latest research from the Capgemini Research Institute – a need intensified by the COVID-19 pandemic. Its latest report, launched today, highlights that the responsibility for addressing digital exclusion lies jointly with public and private organizations, which must come together to ensure that access to essential services isn’t denied to the digitally marginalized.
“The Great Digital Divide: Why bringing the digitally excluded online should be a global priority” reveals that even before the pandemic hit, 69% of people without online access were living in poverty[1] and that 48% of the offline population wanted access to the internet – trends that will have intensified due to worldwide events over recent months.
The report highlights that, even without the global pandemic, the digital divide intersects age, income and experience. Nearly 40% of offline people living in poverty have never used the internet because of its cost, and the age group with the highest proportion offline in the sample[2] is 18 to 36 year olds (43%). Complexity of using the internet (36%) and a perceived “lack of interest” stemming from fear (38%) were also cited by certain segments of the offline population. For these reasons, people are unable to access public services such as critical healthcare information as governments increasingly move resources online.
COVID-19 has demanded a global change in how people live, work and socialize; as unemployment soars and people isolate from their communities, a basic level of digital inclusion has become almost universally vital. The research was conducted just prior to the outbreak, and its findings are now even more pertinent in the current context – with the increasing reliance on digital services exacerbating what was already a desperate situation for the offline population.
Key findings from the report include:
Being offline leads to social exclusion and hinders access to public services
Being offline limits career mobility
Difficulty in applying for jobs online and a lack of access to online learning and education tools can make upwards career mobility more challenging for the offline population, while a lack of digital skills development can inhibit the potential for career mobility once in a role:
The digital divide is also about a skills and learning divide
The digital divide is not just about access; it is also about improving skills and learning for those who are online. By improving their online skills, respondents said they could educate themselves better and find a better-paying job (35%), give their children more opportunities (34%), stop struggling to pay bills (33%) and get public benefits they don’t currently have (32%).
The responsibility of bridging the gap must be shared
Capgemini’s research notes that the responsibility for digital inclusion and access to the internet cannot fall to one group alone. Private organizations need to consider their role in today’s world: increasingly beholden not only to shareholders but also to their customers, employees and communities, they must look more broadly at how they can benefit society in the long term by incorporating digital inclusion and equality into their business strategy. Meanwhile, governments and the public sector need to play a leading role in enabling internet access and availability, especially for marginalized communities. This can be tackled at two levels – public access and private, in-home access – but it means creating greater accessibility for online public services and taking more responsibility for keeping costs low for consumers.
Together, organizations and policy makers need to work to build a global community of action on digital inclusion. They can mobilize peers, NGOs, academics, and governments to foster evidence-based policies on digital inclusion and work with partners to promote digital inclusion through pro-bono projects that leverage their expertise.
“COVID-19 is likely to have a lasting impact on access to public services and on attitudes to opportunities like remote working, so there’s a collective responsibility to ensure that organizations which work to challenge the digital divide do so in a way that creates long-term change, not just a quick fix,” said Lucie Taurines, Global Head of Digital Inclusion at Capgemini. “In the wake of this pandemic, we expect to see a closing of the digital gap – for example, elderly people who have previously not felt a need for digital access will quickly find themselves engaging with digital tools in place of face-to-face socializing and the provision of goods. However, this is reserved for those who can get access to the internet but have previously chosen not to. The impact will be felt among those who still can’t use online services, whether through a prohibitively high cost or a lack of local provision. Here we’ll see a polarizing effect, especially for those already living in or falling under the poverty line.”
As an organization, Capgemini is focused on four key areas to reduce the digital divide and lead digital inclusion:
Since the outbreak of COVID-19, Shenhao Technology Co., Ltd has deployed smart robots that use Advantech technology to monitor body temperature and conduct identity detection in schools, banks, and other public places.
Shenhao, a leading provider of IoT products and solutions aimed at smart city applications, offers a smart patrol robot known as Health Guardian 1 equipped with Advantech's UNO-2484G edge gateway to support disease prevention and inspection. Because measuring body temperature manually exposes public safety personnel to potential health risks, robots that feature infrared sensors capable of scanning temperatures within a 5-meter area have been deployed. These robots conduct temperature measurements and identity detection, and all collected data is sent to a centralized server or management dashboard for health screening. To enable autonomous movement and navigation, the robots are also integrated with simultaneous localization and mapping navigation (SLAM navigation) technology. Thus far, Health Guardian 1 robots have been widely deployed in the southern cities of China.
Embedded computers play a significant role in ensuring the stable operation of robots and in collecting accurate raw information. Accordingly, Health Guardian 1 robots were equipped with Advantech’s UNO-2484G fanless x86 industrial edge gateway for collecting data and executing commands without collision or accident. The UNO-2484G is a highly ruggedized embedded computer powered by an Intel® Core™ i7 processor that delivers high-performance computing. A built-in Intel® i210 Ethernet controller and 8 GB of memory facilitate I/O-based control and convenient operation. Moreover, the system’s rugged chassis protects against vibration, and the modular design enables flexible configuration for a wide range of applications.
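The data path described above – readings collected at the edge and pushed to a centralised screening server – might be sketched as follows. The endpoint, payload shape and use of the `requests` library are assumptions for illustration:

```python
# Sketch: an edge gateway forwards temperature/identity readings to a
# central dashboard. The URL and payload fields are hypothetical.
import time
import requests

SERVER_URL = "https://screening.example.com/readings"  # hypothetical

def forward_reading(robot_id: str, temperature_c: float, person_id: str):
    payload = {
        "robot": robot_id,
        "temperature_c": temperature_c,
        "person": person_id,
        "timestamp": time.time(),
    }
    # Edge gateway pushes each reading to the centralised server.
    requests.post(SERVER_URL, json=payload, timeout=5)

forward_reading("health-guardian-07", 36.6, "badge-4411")
```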
An increase in video volumes
StarLeaf, the global provider of meeting room solutions and video conferencing services for enterprises, has released the StarLeaf Trends Report outlining the uptake of video conferencing usage seen across the business, during the coronavirus (COVID-19) pandemic.
The report collects data from a number of countries, including the US, UK, France, Sweden, Germany and Italy. From the US, notable findings include:
From the other countries surveyed, the data revealed:
Discussing these findings, Mark Richer, CEO, StarLeaf explains: “Video conferencing has played an integral role in the move to remote working, providing business continuity and helping people to work with colleagues and customers wherever they are, and this is supported by the significant growth of video meetings we’ve seen across the US. We predict that when lockdown restrictions begin to ease and US businesses start looking to the future, video for collaboration will remain a core and vital part of an organization’s way of working.
“The financial impact of coronavirus is undeniable, and we believe many organizations will need to deploy cost cutting measures. Physical office space will be one area under consideration, with many businesses potentially downsizing their workspaces or looking for flexible office space rather than long-term leases, made possible by greater numbers of staff being able to work remotely. We also can’t ignore the psychological impact of coronavirus. The idea of commuting back into busy central business districts (CBDs) will be daunting for many employees. Employers will need to be sensitive to this issue and offer greater flexibility to those who feel they need it.
“We are also likely to see a change in attitudes towards areas such as recruitment. Historically, the ability to employ the best people was restricted by geographical location. With more remote and flexible working practices, organizations will be able to think more broadly about who they employ and not be restricted by where that person is based.
Richer concludes: “One final consideration is the positive impact coronavirus has had on environmental sustainability. It’s a high priority for leaders in most organizations, and many will look at how coronavirus has improved their environmental impact and will want to build on this. We can expect to see more organizations re-evaluating their travel needs, opting to keep the more viable, environmentally friendly alternatives such as video meetings.”
Using SDM to eliminate traffic and bottlenecks in the delivery process.
By Anders Wallgren, VP Technology Strategy, CloudBees.
Traffic bottlenecks are a major inconvenience in our daily lives, causing delays during our commutes and preventing us from getting to our destination on time. Bottlenecks in the software delivery process can cause the same problems – and while most DevOps teams have made significant strides by implementing continuous integration (CI), continuous delivery (CD) and application release orchestration (ARO), many teams within the organisation are still operating in siloed development processes and are failing to communicate with one another.
This means that teams use different tools and work from outdated, incomplete or missing data and information, resulting in duplicated workstreams or application releases that are littered with bugs. As a result, developer teams are forced to restart projects because they didn’t have the information needed to get it right the first time, creating severe bottlenecks in the software delivery process.
CI and CD as a red stop light: limitations and constraints
While some organisations may have a mature CI/CD pipeline and be fully committed to DevOps practices, these businesses will often find they still lack end-to-end insight into their value chain. They cannot see where products are getting stuck or where problems are recurring – this lack of insight itself only exacerbates the delay in delivering the product to the customer.
Unfortunately, CI/CD doesn’t usually provide the information needed to measure how well the software organisation is creating value for the business. Without the data required to measure this, software organisations cannot understand if they will be successful in achieving their goals and how they can measure or track progress. Without visibility into all the various stakeholders involved in the delivery process, organisations cannot foster the required collaboration for successful DevOps implementation. Businesses don’t just need speed and agility when it comes to software development – they need actionable, data-driven insights to ensure that software is being developed with the right functionality to meet the business need it was designed to address.
All systems go: SDM to the rescue
In the same way that DevOps breaks down the walls between the development and operations teams, Software Delivery Management (SDM) breaks down the bottlenecks that delay the process of delivering software by ensuring that all artefacts and data are integrated into a unified common data layer.
Through its four key pillars – common data, universal insights, common connected processes and cross-functional collaboration – it helps organisations overcome the limitations of CI/CD by ensuring that key information is connected and easily accessible, giving each team an unprecedented level of insight into where bottlenecks and inefficiencies are occurring. It allows them to improve and streamline communication, understand each other’s needs and ultimately make software that is not only bug-free, but also effective at addressing business needs and creating value for the customer.
· Common Data: Instead of data being locked away in silos of domain-specific tools, SDM enables all stakeholders involved in software delivery to have access to the same data. Software developers can look at customer interviews to understand how features are being used, product managers can preview features to plan their product roadmap, and so on. Providing access to common data and context empowers stakeholders to make informed decisions.
· Universal Insights: As a result of common data, stakeholders can gain shared and universal insights. For instance, information about the software delivery process can be analysed by the customer success team to identify where a fix for a customer service problem is in the pipeline.
· Common Connected Processes: When an organisation’s processes and ways of working are disconnected, miscommunicated decisions and missed deadlines abound, inhibiting both value and speed. However, when these processes – such as product planning, customer support and software delivery – are connected by common data and universal insights, the rapid and continuous delivery of business value becomes the new standard, and collaboration is seamless.
· Cross-Function Collaboration: By establishing these three pillars, continuous and frictionless cross-function collaboration becomes natural; allowing all business units and stakeholders to gain transparency into data, tools and processes, analysis, and business goals.
By unifying software development and delivery teams, SDM ensures there is continuous alignment across stakeholders as software is being developed, ultimately ensuring the continuous creation of business value. SDM extends the feedback loop to encompass the entire application lifecycle, from issue creation to end users interacting with the application.
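A deliberately simplified sketch of that common data layer idea follows: every function writes to and reads from one shared store, so a support agent can see exactly where a fix sits in the delivery pipeline. The schema and entries are illustrative assumptions, not a specific SDM product’s data model:

```python
# Sketch of a common data layer: all teams share one store, so any
# stakeholder can trace an issue end-to-end. Illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (issue TEXT, source TEXT, status TEXT)")
rows = [
    ("BUG-101", "support",  "reported by customer"),
    ("BUG-101", "dev",      "fix merged"),
    ("BUG-101", "pipeline", "in staging deploy"),
]
db.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

# Customer success reads the same data developers write to:
for source, status in db.execute(
        "SELECT source, status FROM events WHERE issue = ?", ("BUG-101",)):
    print(f"{source}: {status}")
```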
Paving the way for better business
With SDM, everyone from the software organisation through to product marketing and customer success has access to the same set of data and one unified platform for collaboration. Product marketers have a clear idea of how a feature will work, when it will be ready for deployment and the types of customers it’s best for. Customer support teams have visibility into when new features will be released and can alert developers to patterns in support requests. As the feedback loop widens to include other business units, developers get more valuable feedback that leads to more intelligent iterations and software improvements.
The aim of software development isn’t to create the most technically sophisticated application – rather, it is to use software as a tool to create business value as quickly as possible. Similar to DevOps, SDM not only brings together different parts of the engineering department, but facilitates collaboration between engineering and sales, marketing and other business units. The ability to bring business strategy and software engineering together is critical to creating applications that are as effective as possible at meeting the business goals.
No one wants to be stuck in traffic – SDM is the fast lane that is needed to escape congestion and accelerate enterprises reaching their destination.
The bottom line of most articles about Desktop-as-a-Service is that DaaS will lower your security risks. The how and why are rarely delved into in any great depth, but understanding the reasons behind this benefit will allow you to see why DaaS could work for you and your business – particularly at the moment, when most teams are working remotely.
By David Blesovsky, CEO at Cloudhelix.
Firstly, to explain what we mean when we talk about DaaS: the delivery of a fully managed virtual desktop instance (VDI) hosted on cloud infrastructure. It’s not a new concept, but 15 years on from its inception, DaaS is finally coming into its own.
Now, DaaS enables users to access corporate applications and data via a familiar Microsoft Windows desktop experience on almost any device connected to the internet. But it was conceived around, and continues to focus on, stronger defence in the face of security and compliance risks.
Control and cloud clarity
The modern workplace is agile and full of freedom. But granting that freedom while controlling the essentials, without being restrictive, is a difficult balance for many. With DaaS, the risks that naturally arise from your staff working anywhere and on any device are contained: you don’t have to worry about what data is held on users’ devices – or, more to the point, where a device gets left at the end of a long day.
DaaS moves the security risk from hundreds of end-user devices and puts it all into the controlled and managed environment of a data centre. The data remains at the data centre, and you have control over all the company assets, able to revoke access at the touch of a button.
Management with no mis-
Whether it’s controlling orphaned accounts left by leavers or ensuring everyone has the latest patches and applications, these common logistical issues melt away with DaaS. One central image (or a few, based on personas) is operated, so that once a change is made, everyone is up to date.
And there’s no need for standardised hardware builds for end-user devices, because DaaS will run on almost any device, no matter the configuration. Your IT team can manage virtual desktop security just like they manage their existing infrastructure today, with the same credentials and permissions.
Secure separation
Whilst working with a provider like Cloudhelix, and a solution such as VMware Horizon DaaS, you can ensure you’re getting complete network separation between tenants (preventing address collisions and unwarranted access) and tiered roles (to ensure the access users have is the access you want to give). For those wanting a little more technical detail, resource separation is enabled across:
● Storage: every tenant is assigned its own unique storage unit.
● Connection brokers/web application.
● Databases: including tenant passwords for encryption.
● Directory Services: each tenant is able to use its own AD system without any risk of improper security privileges leading to a security breach.
● Tiered tenant roles: focused on three levels of IT Administrator, End User, and Service Provider.
Disasters, disabled
No business can truly escape real disasters, even with a plan. More often than not, disaster recovery (DR) plans cover servers and networks but don’t protect desktops at all, because duplicating a traditional desktop estate is too expensive. But if your desktops go down, how will your employees work for the foreseeable future? What if they rely on physical desktops but can’t get to the office? Previously, a DR plan might have considered freak weather or power outages – but how does your set-up fare in the face of the worldwide pandemic crisis that coronavirus has brought?
The challenge presented by coronavirus means business as normal, albeit from home, and DaaS can be seen as the “Desktop DR Insurance Plan”. Not only do you benefit from having your desktops in a secure and highly available data centre, but the likelihood is that your service provider will host across multiple centres to ensure you’re up and running, no matter what.
A-to-B sanctuary
Working with a provider, such as Cloudhelix, will ensure that the infrastructure as well as the DaaS solution is secure from back to front. Confidence in the platform that underpins your DaaS is key, and we recommend a robust, scalable and secure environment from a provider who understands your business needs. If you’re moving away from onsite hosting, look for UK Tier 3 data centres, and ISO accreditations (ISO 27001 and ISO 9001) to guarantee data sovereignty.
With built-in security capabilities such as secure point-to-point network connectivity, dedicated compute, and network isolation, with DaaS you can have the confidence that your corporate data and applications are secure.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 7.
It’s perhaps an understatement to say that remote working is top of mind for many of us right now. The Coronavirus pandemic has forced most businesses to send their staff home for the foreseeable future, hoping to delay the spread of the illness and keep those around them safe.
By Darren Watkins, managing director for VIRTUS Data Centres.
Remote working isn’t a new concept, and the technology to support this immediate need – and to power the broader trend towards remote working – is already available. Cloud platforms that give users access to a collaborative, scalable and convenient remote virtual work environment have been in use for some time.
However, today’s situation is somewhat different to the one IT teams have been used to, due to the sheer scale of people working from home all at the same time. Although the technology is available, until now only some workers have been using it, either because they had flexible working arrangements or because their jobs are field-based. This sudden increase in usage is driving huge changes in demand, not just for front-end tools and hardware but also for the back-end networks, servers and computing power that enable it all to work. Fundamentally, the backbone of successful cloud-powered remote working is the infrastructure that underpins it.
Getting it right today
The immediate, pressing need is to ensure employees have access to the cloud applications that provide a virtual working environment. The current surge in use of these applications is putting intense pressure on organisations’ security, servers, storage and networks, and to deal with these new demands, IT departments are having to deploy more future-proof capacity management strategies.
This puts the data centre strategy front and centre for IT managers - and it’s the outsourced and co-located data centres which are enabling businesses to continue to operate. Not only can they support demands for high-bandwidth and reliable connectivity, they also provide physical security, redundant power, expert monitoring and 100% uptime guarantees.
Reaping the benefits for tomorrow
In the face of a global crisis like the coronavirus, the immediate priority for many businesses is simply to keep operations running. However, when it comes to remote working, the current crisis appears to be “forcing the hand” of many organisations. If they are able to embrace more flexible working practices permanently, businesses can expect to reap long-term benefits. Cost savings could be achieved by making strategic decisions to reduce office space, and businesses could become more agile, speedily taking advantage of new business opportunities in different geographies. Winning the battle for talent can also become a reality: attracting and retaining staff who are unable or unwilling to work in an office, and harnessing the skills and experience of individuals who juggle childcare or care for elderly family members alongside work, or who simply don’t want to work in traditional office-based environments.
Just ten years ago the idea of mass remote working would have been impossible - the underlying infrastructure simply wasn’t in place to support it. But today, the global data centre industry is already powering billions of internet-connected “things” and the vast volumes of data they generate - the backbone is firmly in place to help deal with the demands mass remote working will bring. And this is improving all the time. Increased deployment of High Performance Computing (HPC) provides a compelling way to maximise productivity and efficiency and to increase available power density - the “per foot” computing power of the data centre - crucial as we move away from centralised office hubs into thousands of disparate home offices.
Any discussion around data centres inevitably comes hand in hand with environmental concerns - and data centre providers are already working hard to fuel a power-hungry industry with renewable energy. But one of the overarching benefits of remote working is likely to come in the form of serious ecological good, as commuting and business travel are significantly lessened. There are other benefits too. While there may be increased IT set-up costs, the requirement for businesses to have expensive office facilities may become a thing of the past, powering a more nimble and cost-effective business environment.
There is no doubt that the coronavirus will change people’s attitudes and behaviours – potentially forever. The technology industry is adept at finding answers to problems - and tech vendors are making swift progress in terms of security, collaboration, accessibility and storage solutions to help with the immediate need. But, it’s likely that this new world approach will be here to stay and remote working will become the new norm across sectors. Data centre strategy will become even more critical in ensuring the infrastructure is powerful, safe and reliable for people to work wherever they want, whenever they want.
Quantum is coming as a powerful challenge to cryptography and all of the information modern encryption keeps safe. Many understand that but view it as a distant threat. What they don’t realise is how far they’ll have to go to prepare for it.
By Tim Hollebeek, Industry and Standards Technical Strategist, DigiCert.
Post-quantum cryptography (PQC) is rightly being heralded as our main defence. PQC algorithms that can effectively protect against quantum attack and plug into existing Public Key Infrastructures (PKI) are being eagerly awaited by governments and enterprises alike.
According to DigiCert’s 2019 survey, 35 percent of enterprises don’t yet have a PQC budget. Two out of five respondents claimed that it would be extremely difficult to upgrade their encryption from current standards and many worried about the high cost of doing so.
These are just some of the reasons that quantum threats will likely prosper. The slow pace of PQC adoption will be the downfall of many.
Quantum computing will likely defeat much - if not most - of the modern encryption on which network computing relies within this decade. We speak, of course, of the 2048-bit RSA keys and the elliptic curve cryptography that keep everything protected from the range of threats that data faces on a day-to-day basis. That’s the opinion of the US National Institute of Standards and Technology (NIST) - one of the world’s foremost authorities on the subject.
Quantum’s edge is its ability to explore many possibilities at once. Classical computers speak in bits - a series of 1s and 0s which act as their language. Quantum’s version of bits - qubits - can be 1s and 0s too, but they can also exist in a superposition of both states at the same time. It is that property, combined with entanglement, that lets quantum computers attack certain problems in ways that put them far beyond classical machines, no matter how powerful those machines are.
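To picture superposition concretely, here is a minimal Python sketch (our own illustration, independent of any vendor’s hardware) of a single qubit as a pair of complex amplitudes, with the Hadamard gate placing a definite 0 into an equal superposition:

    import math

    # A qubit is a pair of complex amplitudes (a, b) for the states |0> and |1>,
    # with |a|^2 + |b|^2 = 1; measuring yields 0 with probability |a|^2.
    def hadamard(a, b):
        # The Hadamard gate turns a definite basis state into an equal superposition.
        s = 1 / math.sqrt(2)
        return (s * (a + b), s * (a - b))

    state = (1 + 0j, 0 + 0j)      # start in the definite state |0>
    state = hadamard(*state)      # now equal parts |0> and |1>
    p0, p1 = abs(state[0]) ** 2, abs(state[1]) ** 2
    print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")   # P(0) = 0.50, P(1) = 0.50

A classical simulation like this needs memory that doubles with every qubit added, which is precisely why real quantum hardware pulls ahead on certain problems.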
For encryption, that means every conversation, transaction, dataset, identity, device and endpoint protected by those keys will be easy prey for a quantum-ready adversary.
Were you to throw a classical computer at a 2048-bit RSA key, for example, it would take on the order of quadrillions of years to recover the private key by factoring its modulus. With a scalable, fault-tolerant quantum computer running Shor’s algorithm, that could shrink to a mere matter of months.
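The orders of magnitude can be sanity-checked with some back-of-envelope arithmetic. The Python sketch below is our own rough illustration, not NIST’s or DigiCert’s figures: it uses the textbook asymptotic cost of the general number field sieve (the best-known classical attack on RSA) against a 2048-bit modulus, versus the roughly cubic gate count usually quoted for Shor’s algorithm, with deliberately crude constants:

    import math

    n_bits = 2048
    ln_n = n_bits * math.log(2)

    # Classical attack: general number field sieve, with asymptotic cost
    # exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3))
    gnfs_ops = math.exp((64 / 9) ** (1 / 3)
                        * ln_n ** (1 / 3)
                        * math.log(ln_n) ** (2 / 3))

    # Quantum attack: Shor's algorithm needs on the order of n^3 gate operations
    shor_ops = n_bits ** 3

    rate = 1e9                                  # assume 10^9 classical ops/second
    years = gnfs_ops / rate / (3600 * 24 * 365)
    print(f"GNFS: ~10^{math.log10(gnfs_ops):.0f} ops, ~10^{math.log10(years):.0f} years at that rate")
    print(f"Shor: ~10^{math.log10(shor_ops):.1f} quantum gate operations")

The asymmetry, not the precise numbers, is the point: the classical attack is super-polynomial while the quantum one is polynomial.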
Quantum computing has hit an acceleration point in the last year. In 2019, Google claimed to have reached quantum supremacy with its quantum computing project - a claim IBM publicly disputed even as it pressed ahead with its own quantum programme. In March 2020, Honeywell announced it would be bringing “the world’s most powerful quantum computer” to market in the near future. In the meantime, funding has poured into the quantum field and public interest has spiked. Quantum has just come over the horizon.
The availability of commercial quantum computing is, by many estimates, still some way off. In 2015, the European Telecommunications Standards Institute predicted commercial quantum computing would arrive within 10 years. Five years later, widespread quantum computing still seems five to ten years off, according to DigiCert and ISARA’s research. That might allay some people’s fears about their ability to prepare for quantum - after all, a decade is a long time and more than enough to prepare for just about anything. Right?
A few human years is more like a few days in cryptography. Even if it does take a decade for quantum to pose a widespread threat to data protection, that will still be far too short for many. Think of the IoT devices manufactured today and expected to still be in use five, ten or many more years from now. These could include automobiles, transportation systems, medical devices, industrial systems, 5G deployments, smart grids and so on.
The gap between the moment new cryptography is needed and the moment enterprises adopt it at scale stretches far wider than anyone should be proud of. We call that crypto sloth. The time between those two points is boom years for cybercriminals.
The history of cybersecurity is littered with just such examples. The Diffie-Hellman key exchange was invented in the mid-1970s. While it is now a central part of modern cryptography, computational power could not accommodate it for decades after its inception. Even after it became practical, elliptic curve cryptography took years to be widely adopted.
EternalBlue - a Windows vulnerability - is a further example of that sloth. When WannaCry hit in May 2017, it used EternalBlue to launch a global cyberattack - one of the largest ever recorded. In June, NotPetya spread around the world, causing similar havoc. The tragedy was that Microsoft had released a widely available patch for EternalBlue earlier in the year. The attacks could only do as much damage as they did because many had ignored patching advice. EternalBlue continues to threaten Windows machines today for the same reason.
If organisations have had such a problem simply patching, then implementing PQC will be considerably harder. The instinct to ignore problems, delay solutions, or relax because the threat seems years away will only exacerbate that sloth and the inevitable threat.
When it comes to quantum, fighting crypto sloth is about more than just quickly adopting the PQC algorithms needed to head off quantum threats. It’s about preparing your environment to be crypto agile. Resisting quantum threats will likely require a variety of cryptosystems and keys, and enterprise PKIs - including those designed to secure IoT devices - will need to be able to switch between different algorithms on the fly.
Companies need to deploy crypto agility to ensure they can replace mass quantities of cryptography and digital certificates should the need arise. The organisations that are preparing now are getting to know their own environments, gathering intelligence and understanding how they already use encryption. They can then move to automate much of their cryptographic activity by creating systems to manage keys as well as discover, remediate, revoke, renew and reissue certificates.
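In code, crypto agility mostly means never hard-wiring an algorithm choice into callers. The hypothetical Python sketch below (the names are our own, not DigiCert’s API) keeps algorithms behind a registry, so that moving from RSA to a post-quantum scheme becomes a policy change plus re-issued certificates, rather than a rewrite:

    # Minimal crypto-agility sketch: callers never hard-code an algorithm.
    SIGNERS = {}

    def register(name):
        def wrap(cls):
            SIGNERS[name] = cls
            return cls
        return wrap

    @register("rsa-2048")
    class RsaSigner:
        def sign(self, data: bytes) -> bytes:
            raise NotImplementedError("call into your RSA library here")

    @register("pqc-candidate")
    class PqcSigner:
        def sign(self, data: bytes) -> bytes:
            raise NotImplementedError("call into a post-quantum library here")

    def get_signer(policy: dict):
        # The algorithm comes from policy/configuration, so a PQC migration
        # is a configuration change, not a hunt through the codebase.
        return SIGNERS[policy["signature_algorithm"]]()

    signer = get_signer({"signature_algorithm": "rsa-2048"})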
One thing is for sure: the quantum race has already begun. Microsoft, Google and IBM, as well as the governments of the world, have already started accelerating down the admittedly long road to quantum supremacy. Venture capital funding is pouring into quantum projects, and Gartner says that 20 percent of all companies will be investing in quantum in the next five years. Cybercriminals are likely just as excited, and quantum threats will be with us sooner than many expect. If organisations want to stay ahead of those threats, they need to take their first steps towards crypto-agility.
The term ‘customer experience’ has risen to prominence with the digitisation of services, particularly in the increasingly competitive and complex banking and telecommunications sectors. But in reality, customer experience is much more than a strategy to navigate a competitive landscape and digital disruption. It has to be the absolute central cog in a business, with everything else – from product sets, to go-to-market strategies – designed around it.
By Lee James, CTO, EMEA at Rackspace.
This notion of customer centricity harks back to the spirit of traditional shop fronts and market stalls, where the guiding principle was about identifying a customer need and fulfilling it. In today’s age of metrics and customer relationship management (CRM) systems, it’s easy to become focused on qualifying absolutely everything when it comes to customer experience. But this shouldn’t be the case.
Antiquing to IT
Growing up working alongside my parents on their antiques stall, it was about having a variety of choice for customers – from wardrobes to grandfather clocks. Some customers would come knowing exactly what they wanted, but many had nothing specific in mind. Over the years I’ve realised that many of these early learnings also apply to IT services. With a diverse range of challenges and opportunities facing businesses today, customer experience in IT services is about listening and identifying a customer need, then having the breadth of offering to design a tailored solution.
Here are some timeless winning approaches to customer experience.
Working with the customer as a ‘person’ could be as simple as offering a gesture of goodwill. This can go a long way in building a relationship that is friendly, trusting and transparent. An excellent example of this was shown by a Rackspace employee conducting updates to a customer’s IT system. The unexpected maintenance meant that the customer would miss their family dinner - a situation quickly rectified by the Racker, who sent a pizza delivery directly to the customer’s desk whilst they waited. This small gesture demonstrates how organisations can easily step away from their corporate ‘front’ to conduct a meaningful interaction.
People buy from people. The traditional approach to customer experience - connecting with and listening to people - still wins out. When we talk about customer experience, we should remember that it is not a strategy to be implemented or a business target to hit, but a meaningful way of connecting with those around us.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 8.
As a result of the COVID-19 pandemic, we are witnessing an unprecedented increase in home working, which requires remote access for tools and communications to conduct our daily jobs. This disruption is putting IT infrastructures at risk, while validating much of the industry’s investment in business continuity, resilience, scalability, accessibility, data protection and security.
By Brian Ussher, President & Co-founder, iland.
With a global at-home workforce now entirely in place, what can IT professionals and CIOs do to ensure their private and public clouds can keep up and remain safe? And what steps and tests should they take to support a protracted change in the way we work? According to a recent Gartner survey, more than 74 percent of CFOs and business finance leaders expect at least five percent of their workforce will never return to their usual office workspace — becoming permanent work-from-home employees after the pandemic ends.
Even in the face of a global pandemic, we continue to promote a culture that requires easy and instant access to our tools, our information and each other, through cloud collaboration platforms like Slack, Google Drive, Office 365 and Microsoft Teams, as well as in-house applications.
This demand on IT requires private, public and hybrid clouds to have the agility, scalability and security to support entire workforces no matter where they are. IT leaders who have planned for this worst-case scenario are ready to scale at a moment’s notice. Likewise, they’ve already considered the impact on licensing, vulnerability and added traffic from employees working at home over personal devices and unsecured networks.
IT professionals who support an at-home workforce need to understand the difference between employees “running” applications and “accessing” applications. When technology is set up and configured correctly, it should be easy to access. That's the whole idea of SaaS and cloud. The challenge is, how do you administer it? How do you run it?
Organisations that maintain private clouds onsite, which might not be accessible during stay-at-home orders, need a plan to make repairs physically — like swapping hard drives, replacing switches or cables — when their employees are home.
Likewise, whether at home or work, the end-user experience should be the same. If all apps and tools are optimal in an office environment, how do you make those adjustments ahead of time, so remote employees still have the same access and capabilities as if they’re working in the office? And how do you maintain your security and IT compliance obligations?
Where and how to start?
The easiest advice might be to avoid trying to boil the ocean all at once. If your applications and data aren’t in the cloud already, it is still possible to stand up secure VPNs and encrypt applications for mobile devices. If you’re in the cloud already, you’re several steps ahead of others - but you still need to work with your cloud service provider to review your workloads, applications and data requirements.
At the same time you’re focusing on accessibility, remember to address your vulnerabilities. Right now, cybercriminals are stepping up their attacks to take advantage of remote employees. Phishing attacks are at an all-time high on small and large businesses, as well as public resources like hospitals and healthcare providers.
Now’s the time to reinforce your organisation’s IT security and compliance guidelines, many of which already cover scenarios such as employees travelling or occasionally working from home. This includes a refresher on password policies and on how to identify and report phishing attempts. Help employees secure their home networks, and remind them of all the other policies and guidelines they would typically follow at work to protect your company and customer data. This might also be an excellent time to train employees on document and data retention best practices.
COVID-19 will create additional security threats as attackers attempt to take advantage of employees spending more time online at home and working in unfamiliar circumstances. Some of the biggest threats associated with the pandemic include phishing emails, spear-phishing attachments, and cybercriminals masquerading behind fake VPN clients, remote meeting software and mobile apps.
Above all, you must have the same level of resilience and redundancy plans in place for home working as you do for onsite, even if you are 100 percent in the cloud. It is important to recognise that the same problems that happen on a day-to-day basis when you're in the office can also occur when the office is vacant.
Prepare for the new normal
Going forward, all businesses should plan for an eventuality like COVID-19 happening again. This means understanding data security, business continuity, resilience, scalability, accessibility and much more. For example, you may not need extra capacity and compute power now, but you need to know that you can scale to that level within minutes. And, as I mentioned earlier, a lot of organisations have internal-only networks to manage power supply, fans, cooling and switches. What if you can’t get into the building?
Future-proof your environment and understand the boundaries between personal and company devices and assets. Understand what you need to put in place to protect your business and your employees.
And finally, companies that are leveraging cloud services need to communicate frequently with their providers to address future needs and concerns. Make sure you know what they can do ahead of time to keep your remote workforce operating. Hopefully, these circumstances will be short-term, and life will return to some normality soon, but my advice is to always plan for every eventuality and what may now be the new normal.
Mobile revenue to drop 4.1 percent worldwide; regional impact to vary.
Mobile services represent critical infrastructure that’s allowing people to stay connected during the coronavirus crisis. However, that doesn’t mean these services are immune to the pandemic’s economic shock, with 2020 market revenue now expected to come in about $51 billion short of the previous forecast, according to Omdia.
Worldwide mobile communications services market revenue will total $749.7 billion this year, down from the prior forecast of $800.3 billion. This compares to $781.5 billion in 2019. Annual revenue will fall by 4.1 percent this year, with the decline amounting to $31.8 billion.
“Mobile phone companies around the world are experiencing usage spikes as more countries encourage or enforce social distancing and work-from-home rules to slow the spread of COVID-19,” said Mike Roberts, research director at Omdia. “However, the spikes aren’t enough to overcome the impact of the pandemic on consumer behaviour. These rules are having a dramatic impact on various regions of the world, halting new subscriptions and upgrades in the United States, while slashing revenue for operators in Europe.”
Consumer uptake of 5G will be slower than previously forecast, due to the economic situation as well as possible delays in 5G network deployment and in the availability of 5G devices. Worldwide 5G subscriptions will be down 22.1 percent versus the previous forecast; Omdia will release more details on 5G shortly.
In the Americas, mobile service revenue is set to decline by 3.7 percent to $237 billion in 2020. Most of that loss will come in the United States as both net additions and upgrades to higher data plans slow or stop altogether.
Europe will suffer the largest impact of the crisis, with mobile service revenue falling 9.1 percent to $131 billion, representing a downgrade of 9.3 percent compared to Omdia’s previous forecast. This decline will be driven by significant reductions in mobile prepaid revenue and a dramatic drop in inbound roaming revenue.
Vodafone UK, for example, said mobile Internet traffic has increased by 30 percent and mobile voice traffic by 42 percent due to the crisis. At the same time, mobile service providers are seeing new business grind to a halt as retail stores close and consumers stop buying new phones as job losses mount. One example of this widespread trend is AT&T, which is closing 40 percent of its retail stores in the United States.
The Middle East and Africa will see a 3.9 percent decline in mobile service revenues to $84 billion, representing a downgrade of 8.4 percent from Omdia’s previous forecast. Major factors for the decline include the impact of low oil prices on Gulf economies and the fragility of economies and health care systems in parts of Africa.
The high-income Gulf countries have been early movers with 5G in the Middle East and globally, having all launched commercial 5G services in the second half of 2019. However, the economic impact of the crisis is likely to hit consumer confidence and appetite for expensive 5G devices, and mobile 5G subscriptions in the Middle East will be significantly lower at the end of 2020 than expected previously.
Even before the COVID-19 crisis, the number of mobile 5G subscriptions in Africa was expected to be very small at end-2020. Now it will be even lower as a result of the economic consequences of the pandemic and likely disruption to 5G network deployment plans.
While the impact of the coronavirus on the mobile market is significant in every region, it pales in comparison to the impact the crisis is having on sectors such as travel, tourism, hospitality and retail, which have suffered partial or complete shutdowns. The International Monetary Fund now expects the global economy to contract by 3 percent in 2020, according to its latest World Economic Outlook, which was released earlier this month.
“The massive contraction will clearly impact every segment of the economy, including mobile, but how long it will last in each country and region is virtually impossible to predict,” Roberts said. “One bright spot is that in China, the first country hit by the pandemic, there are signs that the mobile market and the broader economy are starting to come back to life.”
Given the high level of economic and commercial uncertainty created by the COVID-19 pandemic, Omdia will be producing a full revision of its global mobile forecasts next quarter.
Now more than ever, organisations are looking to artificial intelligence (AI), and in particular machine learning (ML), to solve complex data challenges and bring new insights and value to the ever-increasing volume of information stored within our businesses.
By Glyn Bowden, SNIA Cloud Storage Technologies Initiative member and Chief Architect, AI & Data Science Practice at Hewlett Packard Enterprise.
The emergence of the data scientist as a mainstream profession within organisations of any size - rather than one confined to high finance, research institutes or governments - demonstrates how rapid the adoption has been. However, with anything moving at this sort of pace, it has been difficult to take time and assess just what impact this new wave of analytics is having on our infrastructures and, more specifically, on the storage estate where the majority of this data currently resides.
Are AI and ML simply the next step along the evolutionary path following data marts and big data? Is the difference just one of scale? The answer is no. There is a very different storage challenge to deal with today, and it has to do with the way data is used.

Traditionally, stored data has had a single use, or at least a single performance profile, at each stage of its lifecycle. We know, for example, that recently created data is typically considered “hot”, as it is accessed most frequently. The data then cools over time as it becomes less relevant, until it is either archived to slow media or expired altogether. This meant a focus on data lifecycle management and hierarchical storage architectures, in which data moves between tiers so that it sits on high-performance media when fresh and active, and on slower bulk media when cold. With the new techniques of AI and ML, however, data can have many uses at any time. That means we will never be able to effectively plan where that data needs to sit from a tiering perspective.
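To make the contrast concrete, here is a deliberately naive Python sketch of the age-based tiering rule just described; the thresholds are invented for illustration. The assumption it encodes - that last access time predicts future demand - is exactly what multi-use ML workloads break:

    from datetime import datetime, timedelta
    from typing import Optional

    def pick_tier(last_access: datetime, now: Optional[datetime] = None) -> str:
        """Classic lifecycle tiering: assumes data cools monotonically with age."""
        now = now or datetime.utcnow()
        age = now - last_access
        if age < timedelta(days=30):
            return "hot"    # fast flash/NVMe
        if age < timedelta(days=365):
            return "warm"   # bulk disk
        return "cold"       # archive or object storage

    # An ML training job can suddenly hammer a dataset untouched for years -
    # by then this policy has already demoted it to the slowest tier.
    print(pick_tier(datetime(2017, 1, 1)))   # -> cold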
Also, if you look at how certain data is stored, it could be in any number of formats. For example, it could be unstructured files, blobs in an object store, or data in a SQL database on a LUN somewhere. If we suddenly decide that one of those data sources is now critical to building the desired model, the demand dynamic on that data will change. Training ML models places a heavy read demand on the data used; in supervised learning, for example, the data is parsed multiple times across the validation and testing phases. The pattern of read I/O is also difficult to estimate, as data may be read in orders it was never traditionally used in, sidestepping existing database indexes, and the resulting random block access can increase latency and hurt performance. Don’t forget that all of this can be happening while the same media is still doing the day job of serving that data to other business systems, impacting the performance of other business-critical applications.
Not only do we need to consider the archives and pools of data within the organisation, we also need to look at the data being captured and what our new processes mean for it. Before ML models are applied against an incoming data source, that data very often needs to be transformed in some way, so that the fields and schema match the trained model’s expectations and format. It will also likely be filtered, particularly if the incoming data feed is very verbose and contains features or records that are not relevant to the model. If these features and records are included, they can overwhelm the infrastructure that provides the inference service, cause additional latency, or increase resource requirements unnecessarily. This is often referred to as pre-processing or pre-engineering. What comes out the other end will be cleaned and transformed data sets that are fit for inference. Again, these have the potential to be very different from the original incoming data, whose original use will still need to be serviced. This could mean forking the data pipeline, so the original data carries on along its previous path while the new fork passes through the cleaning and transformation process on its way to inference. The question then is: is there value in storing both?
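A forked pipeline of the kind described might look like the following Python sketch, with field names invented purely for illustration: every raw record continues to its original destination, while a filtered, reshaped projection flows to the inference service:

    def fork_pipeline(source, raw_sink, inference_sink, wanted_fields):
        """Preserve the data's original path while feeding a cleaned,
        filtered projection of each record to the inference service."""
        for record in source:
            raw_sink.append(record)               # original use is still serviced
            if record.get("relevant", True):      # drop records the model ignores
                # keep only the features the trained model expects
                inference_sink.append({k: record[k] for k in wanted_fields
                                       if k in record})

    raw, infer = [], []
    feed = [{"id": 1, "temp_c": 21.4, "debug_blob": "...", "relevant": True}]
    fork_pipeline(feed, raw, infer, wanted_fields=("id", "temp_c"))
    print(len(raw), infer)   # 1 [{'id': 1, 'temp_c': 21.4}]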
As you can see, the profiles of the data change drastically from the original scenario, and we have to review the performance requirements for data in flight and at rest, as well as the capacity of data stores to cope with potentially different formats or schemas of that data.
Of course, at the scales we are seeing emerge, this is not practical. We need to start thinking about storage systems and data architectures in a new and unified way. We need to accept that data will have multiple purposes, often unknown at the time of collection, and that because of its inherent potential value we will be keeping much more of it around.
The advent of machine learning impacts more than just the active data pools and pipelines. There is now more need for careful configuration control of our data transformation services and model management systems. We need to ensure that everything stays in sync: if a change is made to a model that requires upstream changes to the data, then the transformation needs to reflect that in the live data pipeline as well. Otherwise, the result would be an inference model that no longer receives the features it expects and generates poor results - often not identified or even noticeable for considerable periods of time.
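One lightweight way to keep transformation and model in sync is an explicit feature contract checked at deployment time, so that drift fails loudly rather than silently degrading predictions. A generic Python sketch (our illustration, not a specific product feature):

    EXPECTED_FEATURES = {"id", "temp_c", "humidity"}   # what the model was trained on

    def validate_pipeline(transform_output_fields):
        """Fail fast at deploy time instead of serving quietly bad predictions."""
        missing = EXPECTED_FEATURES - set(transform_output_fields)
        extra = set(transform_output_fields) - EXPECTED_FEATURES
        if missing:
            raise ValueError(f"transformation no longer emits: {sorted(missing)}")
        if extra:
            print(f"warning: unexpected fields will be ignored: {sorted(extra)}")

    try:
        validate_pipeline({"id", "temp_c"})        # humidity has gone missing
    except ValueError as err:
        print(f"blocked deployment: {err}")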
One final thought: with all the changes I’ve mentioned above, I’ve also alluded to the dependency that existing business systems already have on these datasets. This dependency will not change, and it will drive whether a migration or transformation is appropriate or not. We therefore also need mechanisms that allow us to discover and connect to the discrete existing data sources around the organisation. This will allow us to augment the data into our new pipelines and data ecosystem without disrupting its current role. AI and ML present great opportunities to organisations if harnessed correctly, but they can also pose a significant challenge if the impact is not well understood and catered for.
About CSTI
The SNIA Cloud Storage Technologies Initiative (CSTI) is committed to the adoption, growth and standardization of storage in cloud infrastructures, including its data services, orchestration and management, and the promotion of portability of data in multi-cloud environments. To learn more about the CSTI’s activities and how you can join, visit snia.org/cloud.
It’s becoming more and more challenging to visualise the amount of data that we’re amassing in the present day. Visual Capitalist has suggested that we’ll reach 44 zettabytes in 2020 – that would mean that there are 40 times more bytes than there are stars in the observable universe.
By Helena Schwenk, Market Intelligence Lead, Exasol.
Given that so much of this data is irreversibly intertwined with business, organisations need to be able to wield it effectively. This has led to a growing number of Chief Data Officers (CDOs) that are taking charge and helping to develop their organisation’s understanding and usage of data.
Despite this, many organisations have barely scratched the surface of what they could - and should - be achieving with their data. According to recent research from Exasol, only 32% of data decision makers said that their data teams are able to extract the insights they need.
Agreeing an effective data strategy and establishing a robust data culture are imperative for future business success. Without them, data teams will be battling a lack of efficiency when handling and acting upon their data - a serious obstacle in the pursuit of genuine data excellence. Fortunately, our research found that 83% of respondents believe that work is being done to establish their data-driven culture.
Consider a Data Centre of Excellence
One of the challenges to a data-driven culture can come about when a majority is dissatisfied with who is initiating data-driven strategies. 74% consider data strategies to be driven at the board level, but more than half (55%) believe that data strategies should be driven by a Data Centre of Excellence (CoE) or an Innovation Lab.
A CoE is a team of cross-functional data specialists – scientists, engineers, architects, delivery managers, workflow integrators and analysts. For smaller organisations this won’t be quite as industrial a unit, with capable members across different teams sharing out responsibilities and contributing in balance with their day-to-day roles.
Constructing this dedicated CoE to help control how data is interpreted, directed and used makes the most of the skills within a team and acts as a single source of truth for the entire business.
Address the human side of data analytics
A business-wide mentality is imperative when it comes to data – everyone has to be on board to maximise the benefit. Organisations can make their data strategy even more effective with the democratisation of data. If every employee across an organisation is able to gather and/or analyse data with intuitive tools, then faster and better business decisions can be made by the people driving the organisation day-to-day.
A significant proportion (80%) of data decision makers support this, believing that opening up access to data has a positive impact. And the CDO is perfectly placed to make this happen, recruiting ‘data citizens’ in different departments.
This puts the human at the heart of a data strategy that increases productivity and can open the door to exciting new career opportunities and progression. The objective is to open up data to be a tool used by the many, not the few.
Get your deployment model right
Data democratisation isn’t easy though, and four out of five data decision makers said their current IT infrastructure makes it challenging. Deployment model decisions are key once a data strategy is in place. There are many different factors to consider - including speed, cost, types of workload and future requirements - when evaluating whether on-premises or cloud is the best option.
Flexibility is crucial, and a hybrid cloud approach can often be the best model, as many organisations still need to manage sensitive workloads on-premises. But where cloud can really deliver is in the real-time delivery of large volumes of data to large numbers of people. Supporting this, the majority of those surveyed (96%) said they believe a cloud model could make it easier to democratise their data. And of those who have already moved workloads to the cloud, 73% said it has made a positive impact on what they can do with data.
One success story to demonstrate this is Revolut, the fastest growing fintech in the world and a European unicorn company.
A cloud Revolut-ion
The overwhelming success of the organisation has led to explosive growth, with data volumes increasing 2,000% in 12 months. Such development made it unsustainable to continue managing data using the existing operational databases in the long term, with some data queries taking hours to run.
Exasol’s high-performance in-memory analytics database, running on Google Cloud Platform, was chosen to combat this. Transitioning to a cloud-based model reduced SQL query times from hours to seconds, with relevant dashboards available to every employee. Better and faster insights and true data democratisation mean that tasks ranging from checking funds in bank accounts to targeting deals through user segmentation are now optimised.
Achieving data excellence
No matter what data strategy an organisation has, speed and performance are fundamental. Once you can guarantee everyone in your organisation has the power to access and analyse data quickly, you’ll be able to create business value more effectively than ever before.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 9.
Worcestershire County Council has launched five new apps as part of its “Here 2 Help” COVID-19 response programme. The apps were designed, developed, and deployed in just two weeks using the OutSystems low-code app development platform.
“We’ve been able to get increasingly critical apps related to our COVID-19 response deployed in record time - some in under 24 hours,” said Jo Hilditch, Digital Delivery Team Manager at Worcestershire County Council. “This has enabled us to help both vulnerable individuals across the County and our teams get the help they need in this unprecedented situation. Our community response app, in particular, has been very popular and we’ve currently received 1,445 volunteer offers and 1,623 requests for support. We’ve worked with OutSystems for a number of years, and its platform has allowed our developers to work in an incredibly agile way.”
The five apps rapidly developed by the Digital Delivery Team are designed to allow Worcestershire County Council’s teams to respond more effectively to the ever-changing situation.
The Digital Delivery Team has also developed an additional two apps to help support the coronavirus testing programme in both Worcestershire and Warwickshire.
Worcestershire County Council has worked with OutSystems for more than five years, and its low-code platform, designed for the development of web and mobile applications, has already been used to build a range of apps supporting the Council’s digital presence - underpinned by a dedicated team of six developers across three workstreams.
In mid-March, two developers were tasked with developing a range of apps for different departments across the authority in response to the developing COVID-19 pandemic; they were joined by two additional developers later in the month. Initial forms for both the internal and external apps were developed within 24 hours and then refined to meet the rapidly evolving needs of the teams involved, including HR and Public Health.
Now that the apps are deployed, members of the relevant teams can update them in order to collect the information they need. For example, HR can change the skills or experience fields in its volunteer redeployment app quickly and easily, as new skills are required for contingency planning.
Willem van Enter, VP, EMEA, OutSystems, said: “Worcestershire County Council had to deliver critical apps in a very short timeframe, and our platform has helped the team to design and deploy them faster and more efficiently than ever before. This has enabled a range of teams across the authority to connect volunteers with those who need additional help, safeguard vulnerable members of staff and better plan the redeployment of employees, based on skills and experience, as the situation develops.”
Social distancing measures across facilities could also require management to look at upskilling staff to take on more responsibility.
The fallout from COVID-19 will push data centre operators to think more innovatively, particularly when reviewing how internal management and maintenance practices are carried out across the data centre floor. This is according to Chris Burden, Chief Commercial Officer at Memset, who suggests that social distancing measures within facilities could be here for some time, and that upskilling staff to take on more responsibilities could produce a more skilled data centre workforce.
“The coronavirus pandemic has highlighted the important role data centres and cloud computing play when it comes to supporting businesses and institutions alike.
“Many organisations greatly accelerated their digital transformation aspirations, moving swiftly to online services as the crisis took hold. This has meant the data centre and cloud industry is likely to remain on solid footing as we ride out the remainder of the pandemic and beyond. But this should not stop management teams from reviewing their operational practices in order to better support customers and ensure resilience and availability.
“The first order of business will be a review of all business continuity planning practices. All facilities should have up to date plans in place which are frequently tested against things such as outages, flooding and other forms of natural disaster. According to the Government’s National Risk Register, human disease also ranks highly, yet it will be interesting to see how many organisations actually had planned for an event such as COVID-19.”
Burden continues, “Despite data centre staff being classed as essential workers, social distancing measures are likely to be enforced for a considerable time, limiting the presence of staff on the data centre floor. This applies not only to site staff but also to third-party suppliers. Getting suppliers into a facility may become much more controlled, meaning data centre management teams will need to think more strategically about how they can get more out of their on-site staff.
“Management will need to put in place plans to manage this on a long-term basis, and one option could be to upskill staff to take on more responsibility for the maintenance and management of their facilities. We would not be surprised to see operators allocating greater resources to training and development to support this, enabling core staff to take on a bigger role than they might previously have had.
“Ultimately, COVID-19 has brought about profound, irreversible changes to the world. While many saw digital transformation as something of a luxury, the speed at which projects are now being commissioned means we are seeing the true transformation of digital services within organisations. The data centre needs to stay nimble, and key to that will be ensuring staff within facilities are equipped with the knowledge, skills and training to take on more responsibility,” Burden concluded.
Non-profit goes virtual to continue delivering critical services to families and children across Greater Manchester, United Kingdom in the face of COVID-19.
These are anxious times – particularly for families struggling with housing, money, parenting, work and health-related issues. And the services that Positive Steps, a United Kingdom-based non-profit, provides to support them are critical. When COVID-19 struck and remote work became a mandate, the organisation had to figure out a way to continue operating without missing a beat. And it found one in Citrix Systems, Inc. (NASDAQ:CTXS). Leveraging the company’s digital workspace solutions, Positive Steps was able to quickly provide access to all of the tools and applications its employees need to work from home and deliver the usual support the organisation is known for in a highly unusual environment.
Adapting to the Times
“We understand this is a worrying time, but we have adapted and are now working in a more flexible way that allows us to provide support to the community in these rapidly changing and difficult times,” said Garry O'Driscoll, Senior Infrastructure Analyst, Corporate Services, Positive Steps. “Whilst we are undertaking face-to-face contact with our clients only where this is absolutely essential, we are still delivering our full range of services using video interviews, online resources, chat facilities and more.” And it’s doing so through Citrix® Workspace™.
With Citrix Workspace, companies can give employees access to all of the SaaS, web and mobile apps they prefer to use in one unified experience, and give IT teams a single control plane through which they can onboard users and manage application performance without getting in the way. Over the past six months, Positive Steps has been provisioning virtual desktops and has deployed 150 Chromebooks so that its staff can access their Citrix workspaces while out in the community and at home.
Driving Business as Usual
“When we decided three weeks ago to close our office, it was a seamless transition with no problems at all,” Garry O’Driscoll said. “I’m certain many IT departments are in dire straits following this unexpected pandemic, but thanks to our access to Citrix, we were prepared and it’s business as usual.”
Citrix provides a complete range of digital workspace solutions that empower employees to do their very best work in a safe and secure manner anywhere, anytime, using any device. Organisations can use them to gain the agility, speed and efficiency required to manage resources in the dynamic way that unpredictable environments demand, and to position the business for future success.
Atos reveals that one of the most advanced supercomputers in the world, the powerful BullSequana X1000 installed at The Science and Technology Facilities Council (STFC) Hartree Centre, is providing supercomputing power to assist in global computational drug discovery efforts to help combat COVID-19.
The Hartree Centre team is working closely with Washington University School of Medicine, which leads the Folding@home project, allowing a global community of contributors to lend unused background capacity on their personal computers to power simulations of target drug interactions. While there is plenty of compute power available to run these simulations, creating the drug structures to be simulated uses complex and memory-intensive methods that require supercomputers. Creating these drug structures has therefore become the bottleneck in using the vast amount of compute power available across Folding@home.
By using some of the capability of the Hartree Centre’s Atos BullSequana X1000, the team are accelerating this process and creating new drug structures to be simulated fully across Folding@home’s distributed compute power.
The Atos BullSequana X1000 systems at Hartree are also being used to support the work of CompBioMed, the European Centre of Excellence in Computational Biomedicine, as part of a global effort involving hundreds of researchers from across the US and Europe tackling different aspects of COVID-19. As an interim measure before a vaccine can be produced, pharmaceuticals are needed that can reduce the severity of the disease or be used preventively. This requires thousands of compounds to be screened through advanced simulations, demanding high levels of compute power. The Hartree Centre systems form part of an exceptional array of supercomputers across the world being harnessed to undertake these simulations.
Alison Kennedy, Director of the STFC Hartree Centre, said: “We have a hugely powerful supercomputing capability at our disposal here at the Hartree Centre, so our staff were naturally looking for opportunities to contribute to global computational efforts to tackle the COVID-19 pandemic. The way the Folding@home project works is to take a possible compound and use computer simulations to see how it interacts with the virus. It’s not a way to provide a vaccine, but if suitable antiviral compounds are identified, it could help to treat patients who have contracted the virus, which could help them to get better more quickly and reduce the burden on critical healthcare services.”
The team hopes to identify antiviral therapeutics that disrupt one or more of the proteins necessary for the lifecycle of the virus that causes COVID-19, which would help to prevent its further spread.
Andy Grant, Global VP, Large Strategic HPC Deals, Atos, added: “Whether testing new compounds or performing target drug simulations at speed, analytics supported by supercomputers are uniquely placed to aid in the search for potential treatments of COVID-19. The UK has consistently been at the forefront of science and medicine and it is pleasing to see this country playing a key role in what has become an enormous coordinated international endeavour.”
Atos works with the Hartree Centre, located at Daresbury Laboratory, in support of closer collaboration between academia and industry through the power of supercomputing and deep learning.
An increasing number of organizations choose to rely on the many cloud solutions on the market today. In most cases, however, they forget to protect themselves against data loss - and the cloud giants won’t have your back when data go missing.
By Frederik Schouboe, CEO, Keepit.
The Office suite is a crucial everyday tool for everyone, from small one-man companies to bigger enterprises. It is therefore hardly surprising that more and more organizations, big and small, choose to use Microsoft’s Office 365, Exchange Online, SharePoint Online or OneDrive for Business as a core part of their operations.
No matter the provider, cloud solutions offer a flexible, simple and scalable platform: your data are accessible from all your devices, you can scale up or down quickly, easily manage access policies, and rest assured that your data are safe and secure in the cloud.
… Or are they?
What you pay cloud providers for is uptime: access to, for example, Office 365 wherever and whenever you need it. The most recent numbers indicate uptime of more than 99 per cent, and even though downtime does occur occasionally, it is rare and in most cases very short.
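It is worth translating those percentages into clock time. A quick, generic calculation - an illustration of availability arithmetic, not a statement of any provider’s actual SLA:

    HOURS_PER_YEAR = 24 * 365

    for sla in (0.99, 0.999, 0.9999):
        downtime_h = HOURS_PER_YEAR * (1 - sla)
        print(f"{sla:.2%} uptime allows ~{downtime_h:.1f} hours of downtime per year")

    # 99.00% uptime allows ~87.6 hours of downtime per year
    # 99.90% uptime allows ~8.8 hours
    # 99.99% uptime allows ~0.9 hours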
But you alone are responsible for your data.
If Microsoft experiences a major breakdown, if one of your employees accidentally deletes critical data, or you become the target of a ransomware attack against your cloud data, the major providers will only be of very limited assistance to you.
This is a surprise to many customers, as you would logically assume that people who store data for you also take care of them.
No one to call
During this year’s Gartner Symposium, Maersk CISO Andy Powell talked about how the organization managed to get its (server) fleet afloat again after 2017’s devastating NotPetya attack. He talked about being in direct contact with Microsoft’s COO in his attempt to solve the problems caused by the attack, which hit not only Maersk but also the wider international community.
Maersk deserves a lot of praise for its excellent handling of the NotPetya attack - both in terms of re-establishing operations and in its communication with stakeholders and the rest of the world. Far from everyone gets a direct line to Microsoft’s top people to help mitigate attacks, however, and Andy Powell also expressed his lack of confidence in general cloud service security.
According to an IDA survey, two out of three organizations have experienced cyberattacks – and every eighth attempt is, on average, successful. This is scary information in light of increased interest in cloud solutions among cybercriminals.
You have very limited protection: everything that is deleted in Office 365 ends up in the familiar recycle bin, but only for 30 days. After this, the file disappears forever - with potentially disastrous consequences in cases of accidental deletion or GDPR obligations. Among some of Keepit’s municipal customers, schoolteachers have lost several years of preparation material by mistake because of this particular limitation.
Microsoft promises uptime, not restful sleep
To be clear: cloud solutions are generally a great invention, and we use our share of various cloud services ourselves.
The problem arises when you put blind faith in your data being safe and secure in the different clouds.
They are not.
Ultimately, cloud solutions are just software delivered as a service. They too contain bugs and weaknesses which, when found, can be exploited by cybercriminals or cause downtime for both the cloud provider and your organization.
You should, therefore, review all of your contracts with your cloud providers and check your level of security. You should also double-check your existing backup solution to see whether you can build a regular backup habit on top of your existing arrangement. Maersk CISO Andy Powell recommends always running an off-site backup; optimally, you should have three copies of your data on two different types of media, with one copy off-site (the so-called 3-2-1 strategy).
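As a quick illustration, the 3-2-1 rule is simple enough to check mechanically. The Python sketch below uses an invented inventory format purely for the example:

    def satisfies_3_2_1(copies):
        """3-2-1: at least 3 copies, on at least 2 media types, 1 of them off-site."""
        media_types = {c["media"] for c in copies}
        has_offsite = any(c["offsite"] for c in copies)
        return len(copies) >= 3 and len(media_types) >= 2 and has_offsite

    inventory = [
        {"media": "saas", "offsite": False},   # the primary copy in Office 365
        {"media": "disk", "offsite": False},   # local backup appliance
        {"media": "cloud", "offsite": True},   # independent off-site backup
    ]
    print(satisfies_3_2_1(inventory))   # True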
There is a multitude of different ways to create backups, and there is only one primary rule: Even the worst backup solution is better than no backup at all.
With the coronavirus outbreak continuing to gather pace, many businesses around the world are having to provide remote access for all their employees, sometimes for the first time. This is being done under pressure to tight time constraints. It’s difficult for anyone to do their best work with deadlines looming, and worries and concerns about the current situation playing on their mind. IT staff are no different.
By Alan Stewart-Brown, VP of EMEA at Opengear.
Mistakes and misconfigurations are inevitable, and they will potentially give hackers opportunities to exploit. At the same time, with networks under growing strain from increased traffic and surges in demand, the potential for outages is also increasing. In this and other crisis scenarios - from cyber-attacks to winter storms and natural disasters - there is a higher premium than ever on secure remote access and network resilience, and business continuity is becoming even more vital.
At the same time, if outages do happen in such crises, businesses may find getting the network up and running again even more complex. With travel restricted or impossible, sending engineers out to remote sites to address downtime issues and resolve network faults may risk compromising their health and safety, and may therefore simply not be realistic.
For every organisation operating today, keeping the business up and running is likely to be a key concern and the need for network resilience has risen in line with this. When disruption occurs, companies need to be prepared. They need a plan that enables them to recover quickly. The current crisis may have focused minds within networking teams and senior leadership to carry out risk analysis and put measures in place to reduce those risks. But what is clearly required is a new approach that goes beyond simply adding redundancy or even improving uptime to add a layer of intelligence – effectively a resilience quotient to the network’s plan B.
That is because for organisations that need to ensure business continuity today, network resilience is key. Network resilience is the ability to withstand and recover from a disruption of service. One way of measuring it is how quickly the business can get back up and running at normal capacity following an outage.
True network resilience is not just about making a single piece of equipment - a router or a core switch, for example - resilient. In a global economy it is important (especially given today’s circumstances) that any such solution can plug into all of the equipment at a data centre or edge site, map it, and establish what is online and offline at any given time - wherever in the world the site is located.
That enables a system reboot to be carried out quickly and remotely. This is hugely beneficial at all times, but especially at the moment, when engineers and other workers are often unable to travel to either the data centre or the edge location because of lockdowns, and everything has to be done from afar. This is a scenario that looks likely to get more severe - in the short term at least. We are already seeing interconnection providers starting to restrict access to sites, with Equinix a case in point.
Alternative arrangements
If the remote reboot does not work, of course, it might well be that an issue with a software update is the root of the problem. With the latest smart out-of-band devices this can be readily addressed, because an image of the core equipment and its configuration can be retained, and the device rebuilt remotely without the need for sending somebody on site. In the event of an outage, it is therefore possible to deliver network resilience via failover to cellular, while the original fault is being remotely addressed, enabling the business to keep running even while the primary network is down.
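Stripped to its essentials, the failover logic is simple, as the illustrative Python sketch below shows. Real out-of-band appliances implement this in firmware; the host name and function names here are placeholders, not any vendor’s API:

    import subprocess, time

    def primary_link_up(host="gateway.example.com"):
        """Crude reachability probe over the primary network path (Linux ping)."""
        return subprocess.call(["ping", "-c", "1", "-W", "2", host],
                               stdout=subprocess.DEVNULL) == 0

    def enable_cellular_failover():
        print("primary WAN down: routing management traffic over cellular")
        # a real device would bring up its LTE interface and adjust routes here

    while True:                       # illustrative watchdog loop on the OOB device
        if not primary_link_up():
            enable_cellular_failover()
        time.sleep(30)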
Building in resiliency through the OOB approach does cost money, of course, but it also pays for itself: certainly over the long-term and often also in just a one-off instance, depending on the outage and associated costs. You might only use this resiliency a couple of times a year, say – but when you need it, you really need it. Indeed, given the current situation, the cost of network resilience is a small price to pay for business continuity. OOB supports easier provisioning of new remote sites to flex and grow the network as well as fast speed of response. It is about insurance, but also remediation and maintenance.
Why prevention is better than cure
It is worth highlighting that time is critical in these scenarios. When network outages occur, the damage is cumulative, so businesses need to pre-plan and put network resilience in place as a preventative rather than a reactive measure. Often today the issue is not fully considered upfront: organisations defer discussions around network resilience in the optimistic hope that an outage never happens to them. In fact, network resilience should be built into the network from the outset. It should be a tick-box exercise, but typically it is not. Organisations generally either assume that their network is somewhat resilient through the in-band path, or they are not thinking about their branches and remote sites as much as they should.
Of course, anyone who has just suffered a network outage will understand the benefits of out-of-band (OOB) management as a way of keeping their business running in what is effectively an emergency, but as referenced above, it is far better to plan for resilience from the word go. After all, networks are the ‘backbone’ of almost every organisation today, and many businesses will benefit from bringing network resilience into the heart of their approach from the outset.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 10.
It is no exaggeration to say that the COVID-19 pandemic will undoubtedly affect every business in every industry, in some way.
By Gavin Jackson, Senior Vice President and Managing Director EMEA, UiPath.
Company executives find themselves under significant pressure to navigate the short-term impacts of slowing growth - and a how-to guide for surviving this is yet to be written. Many urgent decisions have had to be made about reductions - whether permanent or temporary - affecting staffing levels, discretionary spend controls and scaled-back production, in order to keep businesses afloat.
In some cases, unnatural spikes in the volume of transactional work exacerbate these pressures, particularly when it comes to increasing insurance claims, cancelled-appointment correspondence or rising contact-centre calls. Navigating the problem with a slimmed-down workforce, stalled investments and slower production is becoming increasingly challenging for business executives. So how can this be overcome, particularly when some industries find their workload dramatically increasing and fewer staff to handle it?
There is no time to ponder
Almost one year ago, I wrote an article that spoke of UiPath as a Dreamy Business, with almost unlimited upside. The conviction for that statement was born from the steadfast belief that hyperautomation will unleash the creativity and potential that is so often locked up in teams restrained by mundane and repetitive tasks. In short, hyperautomation accelerates human potential.
But of course, as has been proven time and again…necessity is the mother of all invention. We are at a unique point, in that the rules of time itself have changed. We don’t have the time to take our time.
Nations are building industries in front of our eyes. They are building hospitals within 10 days. They are reinventing production lines to roll out critical components, such as ventilators and personal protective equipment. Necessity is also driving our hospitals to automate clerical work to put extra capacity back into care. It's driving the systematic organisation of testing for the virus and updating patient triage. It's dealing with the wave of benefits claims. Of business loans. Of repatriation of citizens and the abundance of screening and onboarding of volunteer workforces. It's humanity at its best. And it's accelerated.
Mitigating the coronavirus threat by unlocking the door to automation
UiPath understands the urgency of now, and a shared sense of responsibility to help has bound all of us.
With that in mind, UiPath has been providing our end-to-end hyperautomation platform free of charge specifically to help the front line of the fight against the virus. For the Mater Misericordiae University Hospital in Dublin, we have extended our vision of a robot (digital assistant) for every person to the specific requirement of a “robot for every nurse”, which is reducing the time the nursing team spends on clerical work by 30% each day; 30% more nursing capacity right now feels like a blow against the virus. Software robots are now logging into the laboratory system, applying relevant disease codes to tests, and inputting the information back into relevant systems and reports. Several additional healthcare institutions from around the world are also directly seeking our support, and we are doing our best to step up.
We have also made available the free-of-charge UiPath Health Screening Bot, launched in several Asia-Pacific countries and being rolled out in Europe, the Middle East and Africa too. It keeps track of employees' health whether they work from the office or from home. Instead of putting HR teams under strain, the robot sends out a survey and keeps track of who has and hasn't completed it. It also proactively follows up every hour, via various communication tools (Slack, WeChat, WhatsApp), with those who haven't checked in. The Health Screening Bot organises all the data into a summary report at the end of the day. The data summaries are presented in easy-to-read visuals such as pie and bar charts, making it easy to see where employees are working (at home or in the office) and their temperatures.
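As a rough illustration of that workflow, here is a minimal Python sketch of a check-in tracker. It is not UiPath's implementation: all names and thresholds are invented, and the print calls stand in for real messaging integrations (Slack, WeChat, WhatsApp) and reporting dashboards.

```python
# Hypothetical sketch of a daily health check-in tracker.
from dataclasses import dataclass, field

@dataclass
class CheckIn:
    location: str        # "home" or "office"
    temperature_c: float

@dataclass
class ScreeningBot:
    roster: list
    responses: dict = field(default_factory=dict)

    def record(self, employee, location, temperature_c):
        self.responses[employee] = CheckIn(location, temperature_c)

    def send_reminders(self):
        # Run hourly in the described workflow; print stands in for messaging.
        for employee in self.roster:
            if employee not in self.responses:
                print(f"Reminder sent to {employee}")

    def summary(self):
        # End-of-day roll-up: who responded, where they work, elevated temps.
        done = self.responses
        return {
            "responded": len(done),
            "outstanding": [e for e in self.roster if e not in done],
            "at_home": sum(1 for c in done.values() if c.location == "home"),
            "elevated": [e for e, c in done.items() if c.temperature_c >= 37.8],  # illustrative threshold
        }

bot = ScreeningBot(roster=["ana", "bo", "cam"])
bot.record("ana", "home", 36.6)
bot.send_reminders()   # nudges bo and cam
print(bot.summary())
```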
The airline industry has been grounded and, with it, a massive number of cancellation requests has been hitting contact centres. One of our European airline customers is using UiPath to let agents pull data faster using attended automations, combining them with back-office automations to streamline the customer experience.
For one of the biggest online US retailers, ten unattended robots are helping with the selection process as the company adds 100,000 full-time and part-time positions in warehouse and delivery roles to keep up with the sudden increase in online shopping and delivery. The company has been processing 800,000 to one million job applications under time pressure, and UiPath is automating much of the process, reducing the time it takes to complete screening and onboarding.
The tables are turning
Although it doesn't seem like it now, there will be a time, after the virus has been defeated, when the world will turn to a 'new state of normal', and the technology we have depended on will weave its way into our everyday lives, forming a critical part of the new stack of productivity tools and accelerating human achievement.
Now is proving to be the time to strengthen the human-machine collaboration, helping businesses to get through these unprecedented times, while adding capacity to the workforce and ensuring business continuity of systems, processes and tasks.
Hyperautomation is certainly alleviating the impact COVID-19 is having on businesses, and will go some way to safeguarding companies in the future, too.
Business executives should use this time to build the workforce and the workspace of the future that will emerge through this crisis, as there's no doubt industries will develop a greater dependency on automation and innovation. The future workforce is one that is augmented with digital assistants, in a workspace that is largely automated, working alongside the humans who shape the future.
The COVID-19 pandemic has forced businesses into operating under a “new norm” where the working from home (WFH) model has quickly become the recommended and preferred approach. Indeed, the COVID-19 pandemic is forcing organizations to transform how they conduct business, albeit in a very rapid way.
By Shehzad Merchant, Chief Technology Officer at Gigamon.
Digital transformation initiatives that were already in motion are being rapidly accelerated to accommodate the new norm. In the face of such a dramatic shift, already stretched IT and Infosec teams are being placed under considerable amounts of pressure to manage, monitor and secure their infrastructure, data and applications, ensuring business performance and productivity is not impacted.
Making matters worse is the fact that bad actors are taking full advantage of the situation, preying on unsuspecting users' need for information, their fears and their emotions, to rapidly stand up phishing campaigns, malicious websites and attachments that ultimately aim to compromise the user's systems. The end game in many cases is credential compromise. Many organizations accord more trust to users on the Intranet than to users on the Internet. Consequently, users working from home - unknowingly browsing potentially malicious websites or clicking on doctored COVID maps which download malware, for example - are using those very laptops and systems to VPN into the corporate network, and from there are granted a much wider degree of latitude in their access to different resources. Once a user's credentials are compromised, this implicit trust associated with the locality of access from the "Intranet" can be taken advantage of to spread malware laterally within the organization, leading to significant impact. It is clear, therefore, that it's no longer possible to tackle security with a dual, Internet-versus-Intranet approach, where assets within the network perimeter are considered safe.
A good way to navigate this minefield and secure an organisation is to assume that everything is suspect and adopt a Zero Trust approach. Zero Trust aims to eliminate implicit trust associated with the locality of user access, for example users on the Intranet versus the Internet, and moves the focus of security to applications, devices, and users.
Here are a few key points to bear in mind when embarking on a Zero Trust journey:
Zero Trust is a journey, not a product
What's truly important to understand about Zero Trust is that it isn't a product or a tool. Zero Trust is a framework, an approach to managing IT and network operations that helps drive protection and prevent security breaches. It aims to provide a consistent approach to security, independent of whether a user is accessing data and applications from the Intranet or the Internet. In striving for this, Zero Trust actually attempts to simplify security by eliminating the need for separate frameworks, separate tools and separate policies based on locality of access - a dedicated VPN infrastructure for remote access, for example - and by ensuring that users have a consistent experience independent of where they are working from. By putting the emphasis on applications, users and devices, i.e. assets, and eliminating the implicit trust associated with internal networks, Zero Trust essentially aims to reduce the overhead of managing the different security infrastructures associated today with external versus internal boundaries. It aims to accomplish this by requiring a comprehensive policy framework for authentication and access control of all assets.
Visibility is the cornerstone for Zero Trust
The key to implementing Zero Trust is to build insight into all assets (applications, devices, users) and their interactions. This is essential in order to define and implement a comprehensive authentication and access control policy. A big challenge security teams face today is that access control policies tend to be too loose or permissive, or tied to network segments rather than assets, making it easier for bad actors to move laterally within an organization. By putting the emphasis on assets and building out an asset map, policy creation and enforcement can be simplified. And because the policies are tied to assets and not network segments, the same set of policies can be used regardless of where a user is accessing data and applications from.
Discovery of assets can be achieved in many ways. One excellent approach to asset mapping and discovery is to leverage metadata extracted from network traffic. Network traffic makes it possible to discover and enumerate assets that may be missed through other mechanisms. Legacy applications, modern applications built from microservices, connected devices and users can all be discovered through network traffic visibility and their interactions mapped, building a baseline asset map. Having such a baseline is critical to building the right policy model for authentication and access control.
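To make the idea concrete, here is a minimal sketch of asset discovery from flow metadata, assuming flows arrive as simple (source, destination, port, protocol) tuples from a TAP or flow collector. The schema and addresses are illustrative only, not any particular product's format.

```python
# Build a baseline asset map from observed network flows.
from collections import defaultdict

flows = [
    ("10.0.1.15", "10.0.2.8", 443, "tls"),
    ("10.0.1.15", "10.0.2.9", 5432, "postgres"),
    ("10.0.3.4",  "10.0.2.8", 443, "tls"),
]

assets = defaultdict(lambda: {"talks_to": set(), "serves": set()})
for src, dst, port, proto in flows:
    assets[src]["talks_to"].add((dst, port, proto))  # src acts as a client
    assets[dst]["serves"].add((port, proto))         # dst exposes a service

# The resulting map is a starting baseline for access-control policy:
# anything that later appears outside it is a change worth investigating.
for ip, info in assets.items():
    print(ip, "serves", info["serves"], "talks to", info["talks_to"])
```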
Encrypt Everything
While authentication and access control are essential in the world of Zero Trust, so is privacy. Authentication ensures that the end points of a conversation know who is at the other end. Access control ensures only permitted assets can be accessed by the user. However, it is still possible for a bad actor to "snoop" on valid communication and through that gain access to sensitive information, including passwords as well as confidential data. An area of implicit trust today in many organizations is that communication on the "Intranet" tends to be in clear text for many applications - and this is a mistake. We should not assume that communications on the company's internal network are secure simply by virtue of being on the company's network. When carrying out any transactions on the Internet we use "https", which among other things encrypts the data. Communication on the Intranet should be no different. We should work under the assumption that bad actors already have a footprint on our company's network. Consequently, any communication between users, devices and applications should be encrypted to ensure privacy. This is yet another step towards ensuring that a consistent security framework can be used for users on the Internet and on the Intranet.
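As a small sketch of "encrypt everything" applied to an internal service call: the client below speaks https only - even on the intranet - and verifies the server certificate against an internal CA. The CA path and hostname are hypothetical placeholders.

```python
# Hedged sketch: an intranet client that refuses unencrypted transport.
import ssl
import urllib.request

def fetch_internal(url, ca_bundle="/etc/pki/internal-ca.pem"):
    # Verify the server against the (assumed) internal CA bundle.
    context = ssl.create_default_context(cafile=ca_bundle)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    # Plain http is simply not an option, even for internal hosts.
    assert url.startswith("https://"), "intranet traffic must be encrypted too"
    with urllib.request.urlopen(url, context=context) as resp:
        return resp.status

# Example call (hypothetical host):
# fetch_internal("https://payroll.corp.example/api/health")
```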
Of course, encrypting all traffic on a company's network makes it harder to troubleshoot application problems and network issues, and harder for security teams to identify threats or malicious activity. Additionally, in specific verticals this can make compliance a challenge due to the inability to keep logs of specific required activity. For this reason, leveraging a network-based solution for targeted network traffic decryption may be beneficial when moving towards a model where all traffic on the Intranet is encrypted.
Implement a continuous monitoring strategy
Corporate networks are not static. They are continuously evolving, with new users, devices and applications coming online and old ones being deprecated. In these times - when capacity is dynamically scaled up and down, new applications are quickly brought to market, and more IT and OT devices are coming online - the network has never been more dynamic. Cloud migration is further changing the very nature of the network, and the notion of what is "internal" versus "external", in a very dynamic way. Putting in place a framework for authentication, access control and encryption is half the solution. The other half is putting in place a continuous monitoring strategy to detect changes and to ensure that either the changes are compliant with the policy or the policy evolves to accommodate them. Monitoring network traffic provides a non-intrusive yet reliable approach to detecting changes as well as identifying anomalies. Network-based monitoring can be used in conjunction with endpoint monitoring to get a more complete view. In many situations, network-based monitoring can surface applications and devices where endpoint monitoring has been turned off, either inadvertently or maliciously, or where it cannot be implemented. Once bad actors get a footprint on a system, they typically attempt to turn off or work around endpoint monitoring agents. Monitoring network traffic provides a consistent and reliable stream of telemetry data in many of these scenarios for threat detection and compliance.
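A minimal sketch of the continuous-monitoring half of that equation might simply diff newly observed services against the policy baseline built earlier; the (address, port) pairs below are illustrative.

```python
# Compare currently observed services against the approved baseline.
baseline = {("10.0.2.8", 443), ("10.0.2.9", 5432)}

def check(observed):
    new = observed - baseline
    gone = baseline - observed
    for asset in sorted(new):
        # Either the policy evolves to include this, or it is an alert:
        # a new listener on the network is exactly the kind of change
        # a Zero Trust posture cares about.
        print("ALERT: unapproved service", asset)
    for asset in sorted(gone):
        print("NOTE: baseline service no longer seen", asset)

check({("10.0.2.8", 443), ("10.0.2.9", 5432), ("10.0.4.7", 3389)})
```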
As organizations are forced to rotate rapidly towards a work-from-home paradigm, the need to rapidly scale applications and infrastructure for this new paradigm will continue to put stress on different teams within the organization. Even as we look to a future in which this pandemic has perhaps passed, some of these changes will become permanent. In other words, in many cases there may not be a "going back to how it used to be". Embracing the move to a Zero Trust framework will help ensure that as organizations transform to a new normal, security continues to keep pace and serves as an umbrella of protection within which agility and innovation thrive.
Here is what your business should do to maximise speed.
By Veniamin Simonov, Director of Product Management at NAKIVO.
Technology might be helping organisations run 'business as usual' in light of current events, but as they are forced to work remotely – even legislative bodies like the European Parliament – businesses are producing dramatically more data. This data, in turn, needs to be processed, stored and protected.
The demand for businesses to maintain their usual activities using technology software is growing with each new announcement from governments around the world, and with this, data protection concerns are also growing. Therefore, one of the best actions for companies to take in order to secure themselves against any type of data loss is to implement a comprehensive backup of their virtual machines (VMs), significantly boosting backup reliability.
However, if hundreds or thousands of VMs are running 24/7, backup speed becomes a serious issue. If your business wishes to scale out, boosting VM backup speed should be a top priority. There might be several elements slowing down your daily activities; it is important to identify them and consider adopting new approaches. But how can businesses do this?
Identify potential risk factors
It's important to identify whether your VM backup speed is slower than expected due to insufficient network bandwidth, or because the write speed of your target storage is limited. Issues like these relate mostly to resource availability versus cost, and simply cannot be improved without financial and infrastructure investment.
However, one of the major reasons for reduced VM backup speed may be a lack of agility in your current backup software. This means your business is still wasting server resources on outdated legacy backup solutions that lag far behind the ever-growing demands of virtualised environments. In this case, organisations should consider adopting alternative solutions that allow them to reach the maximum possible VM backup speed while maintaining the resilience of their virtual infrastructure.
Embrace synthetic backup solutions
Since most modern backup solutions are designed to deliver the fastest performance possible, replacing legacy solutions with such offerings is a logical choice if you wish to increase the speed of your VM backups and operate your infrastructure effectively and efficiently. For organisations looking to speed up their VM backup performance, the recently emerged synthetic backup approach might be the perfect fit. As backups in this system are created in the repository, the load on the source server is reduced significantly.
Furthermore, since full periodic backups are not needed, businesses are only required to create a full backup of their VMs once, meaning all subsequent jobs are forever-incremental. As synthetic backups rely heavily on Changed Block Tracking (CBT) and Resilient Change Tracking (RCT) technologies, only the data blocks changed on the VM are tracked and transferred to the backup repository. Ultimately, using synthetic solutions drastically reduces the size of each backup, allowing businesses to maximise their backup speed. Each recovery point "memorises" which data blocks should be used for an entire machine restoration, so there is no need to run a full backup repeatedly.
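As a rough illustration of the forever-incremental idea, here is a minimal sketch with disk blocks modelled as a simple dict. It is purely conceptual: real CBT/RCT APIs and repository formats are far more involved.

```python
# One full backup is taken once; afterwards only changed blocks are shipped.
base_full = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}    # taken once
increments = [
    {1: b"bbbb"},             # day 1: only block 1 changed
    {2: b"cccc", 3: b"DDDD"}  # day 2: block 2 changed, block 3 added
]

def synthesize(base, incs):
    """Roll the increments over the base to rebuild a full restore point
    without ever re-reading unchanged blocks from the source VM."""
    image = dict(base)
    for inc in incs:
        image.update(inc)
    return image

print(synthesize(base_full, increments))
# {0: b'AAAA', 1: b'bbbb', 2: b'cccc', 3: b'DDDD'}
```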
Organise your backup jobs
If you find yourself struggling with an overloaded system, chances are your business is running too many backup jobs on the same host at the same time. The more intensive the workload, the more your network resources are stretched and the slower your infrastructure becomes. To avoid this, businesses should take steps to carefully organise their backup activities.
Make sure to schedule backup jobs at times when there is the least activity, and keep backup windows as short as possible. Since some applications operate 24/7, monitoring the traffic will help speed up your backup performance. Additionally, selecting and grouping your backup jobs can also help maximise speed. However, if your business is running data protection in a large virtual environment, some backup jobs might overlap. Luckily, modern solutions offer Calendar Dashboards which take away the burden of having to manually monitor each backup activity and give you a bird's-eye view of all your jobs.
Integrate a NAS appliance
Installing a Network Attached Storage (NAS)-based appliance solution – a combination of high-performance backup software, hardware, and storage in a single device – can help businesses double their backup speed. Such an appliance does not require much in the way of resources, and installation is as easy as setting up any other preconfigured VM backup solution on any NAS device available in your infrastructure.
By using a NAS-based solution, businesses can significantly offload their infrastructure backup workloads and separate data protection from the virtualized environment to generate a VM backup performance boost twice the size of what is usually achieved by legacy backup solutions. Using a NAS-based solution can also help organisations benefit from up to five times lower costs and reduced backup size. By separating VM backups, businesses can rest assured that their VMs can be restored even if their primary infrastructure is down.
It is now more important than ever to replace outdated legacy systems that don't align with your organisation's needs. Modern VM backup solutions can help overcome the issues that arise when businesses operate their VMs 24/7, and adopting modern software is a good route to speeding up the backup of your VM data.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 11.
How a team of TCS scientists used AI to identify 31 molecules that hold promise towards a drug discovery.
In TCS’ Innovation Lab in Hyderabad, India, a team of TCS scientists have identified 31 molecular compounds that hold promise towards finding a cure for COVID-19. This effort is part of the many worldwide mission-critical activities that TCS is engaged in, working with private enterprise and governmental groups. It represents a crucial breakthrough supporting the larger worldwide endeavor towards combating the coronavirus.
Notably, TCS leveraged its prowess in Artificial Intelligence (AI) as a key part of this discovery process.
“The use of AI has considerably shortened the initial drug design process from several months to only a few days,” said Dr Gopalakrishnan Bulusu, a principal scientist involved in the project.
Artificial Intelligence for a Real Health Crisis
The group of scientists in the Life Sciences Research unit of TCS Innovation Labs began by deciding to strategically apply their minds to de-novo drug design – a current focus area of their research. But first, they had to set up an AI model. The fundamental strength of AI is that it can rapidly evaluate multiple scenarios with a multitude of parameters while problem-solving. But any AI model must first be trained to learn the grammar of the subject language – in this case, medicinal chemistry – before it can start suggesting possible scenarios that could build towards a solution.
It is important to remember that the molecular universe comprises zillions of molecules, and the world of chemistry has probably looked at just about 100 million of these. The next step was to ask the AI model about the specific case; in this instance, the SARS-Coronavirus-2 (SARS-CoV-2), the virus that has spawned the disease, COVID-19.
“We knew that the SARS-CoV-2 has a protease protein that is responsible for viral replication. What followed next was to ask the model to generate novel small molecules de novo which have protease inhibiting capability and could bind the target protease protein with high affinity,” said Dr Bulusu. “We filtered the suggestions of the AI model to a set of 1,450 molecules, and further shortlisted 31 that we determined would be good to start with and that could possibly be synthesized for further testing,” he further explained.
While these molecules capture the features of protease inhibitors, they are predicted to be much better binders than the existing protease inhibitors under clinical trial. The results from this research – put together by Dr Bulusu, Dr Arijit Roy, Dr Navneet Bung and Ms. Sowmya Krishnan – have been published in the preprint open-access chemistry archive, 'ChemRxiv'.
Collaborating to take this discovery forward
Following the preprint research being made public, the TCS team is working closely with India's Council for Scientific and Industrial Research (CSIR), which has agreed to provide its labs for the synthesis and testing of these 31 molecular compounds. Much remains to be done before the process can move from drug design to drug discovery and, finally, drug development at scale. One of the greatest challenges the international scientific community faces in relation to COVID-19 research is that not everyone has all the information. There are a lot of pieces that need to come together. Furthermore, given the risk involved in isolating and containing the virus itself, testing is that much more challenging.
TCS’ scientific ethos has been to work at discovery and to learn from the outcomes of every experiment. In that sense, no scientific research endeavor is ever a failed exercise. “We have to keep trying, keep testing, and keep learning from every experiment,” Dr Bulusu concluded.
For now, the TCS R&I team has taken its first small step in a collective global scientific exercise, steadfastly focused on a big drug discovery that a world in lockdown anxiously awaits.
IT in “hyper-care” mode to meet unprecedented digital demand.
The COVID-19 pandemic is putting unprecedented stress on digital services and websites, with technical incidents doubling since the start of March. Yet new data from PagerDuty, Inc. (NYSE:PD), a global leader in digital operations management, indicates IT teams are rising to the challenge, resolving incidents up to 63 percent faster than before the crisis.
Despite being under significant pressure, high-stress verticals are reacting particularly well. For example, companies in online learning saw incidents grow 11x, but are resolving incidents 39 percent faster than before the crisis, PagerDuty data shows. Collaboration services have seen an 8.5x jump in incidents, but are posting 21 percent faster response times. The entertainment vertical is resolving incidents 63 percent faster despite a 3x bump in need.
“Companies have shifted into hyper-care mode, knowing that there are more people online than ever before and expectations on digital services are higher than ever,” says Rachel Obstler, VP of Product for PagerDuty. “Playing a key role in this hypercare strategy is automated incident response, which allows IT teams to identify, contextualise and resolve the most critical incidents in minutes — despite the surge in digital stress presented by COVID-19.”
Hypercare mode, as described by PagerDuty, typically sees IT departments operating in a heightened state of readiness through additional monitoring for top tier services, extra people available on call, and a focus on reliability, scalability and quality of service. This can entail pausing non-essential features or deployments so mission-critical ones perform effectively, reallocating employees from new features to essential “keep the lights on” services and bringing the right signals and contextual data to the right people proactively, so they can get ahead of any slowdowns or errors that could impact the customer if left unchecked.
Ms. Obstler concludes, “It’s really impressive to see what IT teams are doing ‘under the hood’ right now to keep customers online and happy. On top of surging digital demand, IT is also having to spin up remote Network Operations Centres, create new processes and virtualise new infrastructure on the fly — all the while with kids and family life at their shoulder.”
The Norwich-born firm’s technology is being used by the NHS free of charge to provide personalised, contextual advice on self-isolation to staff showing COVID-19 symptoms.
Leading intelligent automation firm, Rainbird, has partnered with the NHS, using its rapid deployment programme to build an online interactive tool that provides tailored advice on appropriate self-isolation measures to NHS staff.
The tool, which was developed to address the staff resourcing challenges currently faced by the NHS, can be updated within minutes to reflect shifts in national guidance, and personalised to NHS organisations. This partnership follows an open letter from the Rainbird CEO offering its technology free of charge to those supporting vulnerable individuals during this time.
The tool, which is currently being used by the Norfolk and Norwich University Hospitals Foundation Trust (NNUH), will help hospital services to manage their staff resourcing challenges. It will also provide appropriate next steps to those with COVID-19 symptoms by combining NHS and government guidance. The tool is also being extended to give advice on which workers to test for COVID-19.
The tool requires staff members to enter identification credentials before being taken through a series of questions on the presence of symptoms and how long they and other members of their household have experienced them. Staff are instantly provided with a PDF document that can be presented to their supervisor and sets out guidance created specifically for that member of staff.
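To give a flavour of how such a rules-driven advisor hangs together, here is a purely illustrative Python sketch. It is not Rainbird's implementation, and the rules and thresholds are placeholders, not clinical guidance; the point is that the rules live in one place and can be updated centrally the moment national guidance changes.

```python
# Hypothetical symptom questionnaire -> tailored advice.
from datetime import date, timedelta
from typing import Optional

def advise(symptomatic: bool, symptom_start: Optional[date],
           household_symptomatic: bool) -> str:
    if symptomatic and symptom_start is not None:
        days = (date.today() - symptom_start).days
        return ("continue self-isolation" if days < 7   # placeholder threshold
                else "isolation period complete - contact occupational health")
    if household_symptomatic:
        return "self-isolate as a household contact"
    return "fit to attend work - keep monitoring for symptoms"

# A staff member three days into symptoms:
print(advise(True, date.today() - timedelta(days=3), False))
```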
Dr Robert Hardman, Consultant in Occupational Medicine at NNUH, said: "This is a vital tool for a number of reasons - not only does it help to protect front-line staff and their colleagues in this time of great need, it will also significantly reduce pressure on occupational health services by giving staff another option for support if they develop symptoms.”
He continued: “Unfortunately, there is still some confusion around how the guidelines on self-isolation apply to NHS workers, which could lead to preventable infection, and added strain on our service through longer-than-needed isolation. This resource will remove that uncertainty and allow staff to inform their line manager they will be absent in one seamless process.”
James Duez, CEO of Rainbird, commented: “We are fully committed, at an individual and organisational level, to doing all we can to support those combatting this global outbreak. Especially our NHS. Our flexible technology can be quickly adapted to suit the nuanced and complex individual circumstances of NHS workers, effectively turning what would be a software development task into a clinical one and markedly streamlining operations.”
“Already, we are planning other forms of support, including a comprehensive risk assessment for staff, taking into account pre-existing conditions, their ability to wear PPE and other factors. We see immense potential for this tool, something reinforced by approaches Rainbird has had from other organisations seeking to take this technology to other countries, including those within Africa, where the disease, and preparedness for it, is tracking some weeks behind the UK.”
Battery manufacturer GS Yuasa are pleased to be able to contribute to the fight against Coronavirus by supplying power to the NHS Nightingale hospitals and other key medical projects.
Since the beginning of the outbreak, GS Yuasa have supplied Yuasa-branded Uninterruptible Power Supply (UPS) system batteries to the new NHS Nightingale hospitals, including London, Birmingham, Manchester, Bristol, Glasgow and Newcastle, which will treat COVID-19 patients. The largest hospital, at The ExCel Centre, received its first patients recently and has the capacity to expand to up to 4,000 beds.
Yuasa VRLA (valve regulated lead acid) batteries are used as standby backup power in UPS systems which ensure the power remains on in the event of a mains electricity failure. The batteries are supplied directly to customers who are installing the UPS systems on site.
GS Yuasa have focused all their resources on supporting critical infrastructure and medical projects during the outbreak. Orders associated with the fight against the virus are being prioritised over all others, with stock and production ringfenced to ensure good availability and fast delivery is maintained at all times.
James Hylton, Managing Director of GS Yuasa Battery Sales UK Ltd, said: “The enormous impact of Coronavirus has been felt by individuals and businesses around the world. We are proud that our batteries have been chosen to back up these key NHS Nightingale hospitals and humbled that because of the quality of our batteries we are able to make a small contribution to the national effort.”
GS Yuasa has also supplied thousands of Yuasa-branded VRLA batteries for use in critical infrastructure nationwide, including other medical facilities. Most Yuasa batteries supplied for these projects have been produced at GS Yuasa’s UK manufacturing facility in Ebbw Vale, South Wales.
The UPS systems installed in hospitals, and many other applications, rely on VRLA batteries to supply electricity in the event of a mains power outage. This power bridges the gap between a mains failure and the moment an emergency generator kicks in. The UPS system ensures that critical systems remain active despite the failure. Each UPS system typically contains hundreds of batteries assembled on specialist racking.
James continues: “We are prioritising our stock at this time to support these crucial projects, and our team have been working hard to ensure we maintain high levels of availability and service in order to provide our partners with coordinated, on-time delivery.
“Our batteries are a key component for a wide range of emergency back-up infrastructure as well as a critical part for vehicles that are required for essential commuting and the distribution of goods.”
Blockchain is among the most discussed and exciting contemporary tech trends, having come into being as a public ledger for transactions for the bitcoin digital currency, and without the need for a central governing body.
By Liam Butler, Area Vice President at SumTotal.
But its wider impact has been to transform the way we validate transactions across all kinds of important applications, and a whole range of industries from finance and insurance to the Internet of Things (IoT), smart appliances and healthcare.
That list also includes Human Capital Management (HCM), with blockchain-enabled solutions already seen in practice. An early example appeared in 2016, when Blockcerts arrived as an ‘open standard for blockchain credentials’, allowing any school to issue and verify blockchain-based educational credentials. When educational powerhouse MIT piloted a digital diploma in 2017, real momentum had arrived.
The Internet of Careers
There remains a wider need to address the growing concern over data privacy and fraud across various employment disciplines. There is also a broad recognition that in the future, career credentials will become one of the most sought after and valuable assets for workers, enabling them to navigate through the jobs market.
In particular, individuals and businesses alike are seeking ways to both verify and control personal career records as a better alternative to digital identities that are vulnerable to hacks and misuse, replacing them with a system that is secure, immutable, and gives control to the users. Blockchain is again offering a method for building a system that can be trusted by employers and employees alike, that is free from the control of a monopoly provider or the public sector.
This system can be more usefully described as the 'Internet of Careers': a concept that allows individuals to oversee what personal data is stored and where, who has access to which elements of the data and for how long, and where and how the data is used. And since blockchain technology is decentralised, no one party is ever in control, with consensus across the ecosystem required before new transactions can be recorded. After that point, those records cannot be altered, ensuring safe and secure transactions.
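To illustrate the immutability principle in miniature, here is a conceptual Python sketch of a hash chain: each record embeds the hash of the previous one, so altering any record breaks every later link. This shows the principle only; real networks such as the one described here add distributed consensus on top, and nothing below reflects any specific protocol.

```python
# Tamper-evident record chain: a toy model of blockchain immutability.
import hashlib, json

def add_record(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    for i, rec in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {"payload": rec["payload"], "prev": rec["prev"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != expected_prev or rec["hash"] != recomputed:
            return False
    return True

chain = []
add_record(chain, {"credential": "BSc Computer Science", "holder": "J. Doe"})
add_record(chain, {"credential": "Cloud Certification", "holder": "J. Doe"})
print(verify(chain))                        # True
chain[0]["payload"]["holder"] = "Someone"   # tamper with the first record
print(verify(chain))                        # False - the chain no longer checks out
```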
From principle to practice
By reinventing how career records are shared across the global labour market, the industry is making important progress in delivering on the Internet of Careers concept. This includes looking at how the same principles can be applied to other career records such as job roles, certifications, promotions, skills and salaries.
In practical terms, these developments have recently seen 14 industry leaders from the HCM and education markets join together to form the Velocity Network Foundation. This is a vendor-neutral, nonprofit organisation that aims to define, deploy and champion the Velocity Network – a globally accessible, open-source, blockchain-powered incarnation of the Internet of Careers.
The Velocity Network will make it possible for people to claim and manage their career credentials. This will include verified education, projects, work history, skills and talent assessments, with their owner choosing with whom to share this information and how others use this data. At the same time, employers and educational institutions can rely on trusted, immutable applicant, candidate, student and employee information, seamlessly and effectively. Through this, they can hope to achieve significant reductions in the time and costs associated with talent processes, while reducing risk through decisions based on reliable data and supporting compliance in today’s global jobs market.
Combined confidence
The scope for establishing the Internet of Careers is already significant, with the Velocity Network Foundation’s initial membership already having career-related data for more than 700 million individuals in their combined systems. This also includes billions of professional and student credentials and assessments, together with employment and contract work records across job information, pay histories, competencies and more. As the network grows, so will the effectiveness of a system described by one of the founders as a “true public utility layer.”
This is undoubtedly a ‘win-win’ for employers and employees alike, with the system geared towards building proof, confidence and formality into an ecosystem that has remained stubbornly resistant to verifiable authenticity. As the industry comes together under a common and independent banner, we are more likely to see the emergence of an employment market that balances the need for transparency and truth with security, privacy and the rights of individuals to control their own data.
Following on from our highly successful coverage of the healthcare sector in the April issue of Digitalisation World, the May DW includes a major focus on the technology challenges and opportunities as the coronavirus continues to cause major disruption to almost every aspect of our lives. The feature runs throughout the magazine, and includes a range of articles and news updates. Part 12.
Exploring the post-COVID tech lessons and legacy.
By Tim Hood, associate vice president for Hyland in EMEA.
COVID-19 has changed everything. As yet, nobody can predict in what ways. However, whatever your business sector you will face a 'new normality' that presents organisational and financial challenges you have never encountered before.
For many organisations, the last few months have already required a total reinvention of working practices and an acceleration in their digital transformation – embracing technology is no longer a matter of choice, but the single most important key to corporate success.
Since few were prepared for government-mandated social distancing measures, businesses have had to implement quick fixes based on technology never intended or designed for remote working. But these are short-term sticking plasters that can't be kept in place forever.
Now, with signs of social distancing easing, organisations are looking at how to move forward and, for some, that will mean returning to old ways of working. Wiser heads, though, may recognise that we are at a tipping point and that, though exceptionally disruptive, now is the time to create better business models, more suited to a world being remade by rapid change.
Those that fully appreciate this will start to reprioritise investment as part of a 'never the same again' strategy, in readiness for further challenges.
How should they start?
In the first instance, by objectively reviewing existing practices and preconceptions, especially about remote working. "We've never done it like this" is no longer a valid or rational response.
Understandably, some will remain reluctant to embrace homeworking because of fears about employee availability and productivity. However, such concerns are largely unfounded, as research consistently shows that those working from home tend to be more productive. In fact, remote employees put in nearly 17 more days of work every year than their office-based counterparts and lose less time to distractions.
In many organisations, homeworking is already an integral part of business life. Owl Labs' State of Remote Work 2019 global survey revealed that over two-thirds of full-time professionals work remotely at least once a month, with one-fifth doing so exclusively. Today, it is reasonable to assume the numbers are significantly higher.
So those organisations still resisting remote working are swimming against the tide and may have no choice but to accept this. Of course, to work remotely, employees need access to accurate and up-to-date information. Unfortunately, social distancing has exposed previously hidden flaws in many processes. Just because departmental culture lets you find information relatively quickly when everyone is in the office, it doesn't mean you have a good system in place. And this is a challenge that goes beyond just internal applications and sharing. More than ever, customers also expect easy access to key information, which makes remote access to internal and external content a key priority.
Similarly, the last few weeks have been particularly problematic for organisations still reliant on paper-based systems. It's hard to work remotely when the data you want is undergoing its own self-isolation in filing cabinets 30 miles away, reinforcing the need to automate processes and minimise manual intervention.
So, companies must address how information is accessed and shared within and between departments. Too often, it is trapped in silos that limit access and allow multiple versions of the same document to proliferate. That's good for neither decision-making nor service provision. The problem is compounded when information is fragmented across applications – it's not unusual for large organisations to have business-critical data spread over more than 200 apps.
If companies are to rise from this upheaval, they must rethink their entire information ecosystem. Many are already actively re-prioritising investment, focusing on the introduction of platforms capable of enabling a wider digital transformation. Such content services hubs will be at the heart of their central IT infrastructure and accessible through a user interface that stays the same regardless of whether they are seated at a desk at home or in the office. The move towards intelligent automation will help accelerate this process and further enhance the practical and financial advantages of having document storage and workflows in a single location.
COVID-19, and the world's response to it, has permanently reshaped the business landscape. Those that don't recognise this, and continue to treat the current global health climate as a temporary disruptor, risk losing out not only to competitors who reinvent themselves, but also to start-ups that build this new reality into their DNA from the get-go.
Fortunately, technology does provide us with the means not just to work our way through a lockdown but to use it as a catalyst for creating a truly effective digital workplace that extends beyond the four walls of the corporate high-rise to each and every remote worker at home. This is not a retrograde step but a progressive one and for many businesses it will help strengthen their own future.
By Ian Bitterlin
In this article Ian Bitterlin, of Critical Facilities Consulting, a DCA Corporate Partner, provides his view on 'high performance' data centres – he examines the ICT load in data centres and then discusses the performance of servers, along with their power consumption.
The idea that we can talk about a 'high performance' data centre in relation to others raises many more questions than suitable answers, so where to start? The most logical place is with the ICT load – or what the data centre is intended 'for' – and this can be distilled down to the three basic components of a data centre load: compute, storage and connectivity. For example, a site for social networking based on users' photo uploads and minor text comments, with a few buttons to click such as 'like', 'report as abusive' or 'share' (no prizes for guessing who!), will need yards of bandwidth and acres of storage but very little computation. Or an onsite data centre for a particle accelerator that creates a Petabyte of data a minute and then expends vast computation capacity producing massive data sets, but hardly speaks to the outside world. And every combination of the three loads in between. Each in its own way could be classed as 'high performance' and, perhaps, we could add 'security' as another performance attribute?
However, to date, most data centres are not built or fitted out for specific applications and they purchase commercially available integrated servers to run multi-app enterprises, multi-user collocation or ‘clouds’ with flexible configurations set by the user. Some parts of each could be classed as high-performance but how could we rank them into low/medium/high?
The problem is that it is a constantly moveable feast, with the release date of the server setting the bar; the more modern, the faster, with the ability to crunch numbers in ever-more operations per second and at ever-less Watts per operation, with hardly an increase in real cost compared to the technology capacity curve. There is also a trend towards ever-lower idle power, for when the user does not utilise their hardware as they should. This server improvement trajectory is mostly ignored by people trying to criticise the rising energy demand of data centres – imagine what the energy growth would be like if the ICT hardware were not also on an exponential improvement curve. In round terms, data traffic has been growing at 60% CAGR for the past 15 years and hardware capacity at 50% CAGR, such that data centre loads have grown at only around 10% CAGR. There is now plenty of evidence that data traffic has flattened out in mature markets, including the UK, and data centre energy is stabilising for the moment – but that is another story…
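As a quick back-of-envelope check on that compounding arithmetic (a sketch, not from the original article): the load left for the data centre to absorb grows by the ratio of the two growth rates.

```python
# If traffic grows 60% a year while hardware capability grows 50% a year,
# the residual data centre load grows by their ratio.
traffic_growth = 1.60    # 60% CAGR in data traffic
hardware_growth = 1.50   # 50% CAGR in hardware capacity
load_growth = traffic_growth / hardware_growth - 1
print(f"{load_growth:.1%}")  # ~6.7% a year - consistent with "only ~10% CAGR" in round terms
```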
So how far and quickly have servers come? A good source of server performance data is the SPEC website where several hundred servers are listed by the OEMs, along with their performance against the SPEC Power software loading routines. The SPEC Power test regimes do not represent all loads (or any specific) but as a benchmark are very informative, although, as usual, ‘there are other test routines available’ etc. For example, we can use SPEC to compare two servers that are both in service today but about 6 years apart in market release date. I will call them Server-A and Server-B although you can do your own comparisons by looking at the SPEC listings for free.
Server-A had a rated power consumption at 100% SPEC load of 394W and performed 26,880 operations per second – which resulted in 45 Operations/Watt. Its idle power (when doing no work at all) was 226W, a surprising 57% of its full-load consumption, although that compared well to some other servers of the time, which idled at a rather depressing 79%.
In comparison, Server-B has a rated power consumption at 100% SPEC load of 329W (17% lower) and performs 4,009,213 operations per second (roughly 150x more) – which results in 12,212 Operations/Watt (270x more). About five years ago a white paper predicted that the lower limit for idle power in silicon chips was going to be 20%, but Server-B managed 13% (44W) in 2017.
So Server-B (not hugely different in purchase cost to Server-A) can, on operations per Watt alone, replace at least 270 modules of Server-A – a remarkable consolidation of 40 ICT cabinets into one. However, when Server-A was released it was a very popular machine that offered huge performance compared to what went before it, and at a lower cost. I remember seeing an HPC installation in the UK handling data sets for oil and gas exploration surveys using Server-A, but I would assume that it has, no doubt, since been upgraded.
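Reproducing that consolidation arithmetic using only the SPEC figures quoted above:

```python
# Throughput and efficiency ratios from the quoted figures.
server_a_ops, server_a_ops_per_watt = 26_880, 45
server_b_ops, server_b_ops_per_watt = 4_009_213, 12_212

print(f"{server_b_ops / server_a_ops:.0f}x the throughput")              # ~149x
print(f"{server_b_ops_per_watt / server_a_ops_per_watt:.0f}x the ops/W") # ~271x
# Hence one Server-B can replace at least 270 Server-A modules on an
# operations-per-Watt basis - roughly 40 cabinets consolidated into one.
```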
So, what ‘was’ high-performance is now painfully slow compared to a modern machine and a data centre using the latest commercial hardware is capable of very high performance indeed…
That is not to say that every server suits every application or type of load – quite the opposite – nor is it true to assume that software is being improved in resource efficiency; again, quite the opposite is the reality. This subject of matching the server to the application and ensuring high utilisation through right-sizing and virtualisation is a key feature of the Open Source designs of Facebook, although it has to be noted that having such a single and simple application helps them achieve very high performance that few others are able to emulate. But that must be the target: refreshing the hardware every sub-3 years and heavily virtualising must become the norm in enterprise and collocation facilities if we are to meet our zero-carbon targets.
Critical Facilities Consulting (CFC) is a DCA Corporate Partner, owned by Professor Ian Bitterlin, a data centre industry expert.
For more information on how CFC and Ian could assist you, get in touch by email: bitterlin@criticalfacilities.co.uk
By Mat Jordan, Head of EMEA - Procurri
There's no doubt that as devices get smaller and technology shrinks in everything but its societal impact, one aspect of IT remains big, bulky and sometimes big-ticket in its pricing: Data Centres. When managing a Data Centre, businesses need to remain as streamlined as possible whilst not compromising on service or performance. In the unfortunate event that something should go wrong with a Data Centre, recovering the service back to normality can be time-consuming and expensive – and if service is interrupted for too long, it could result in end-user loss. It's imperative that performance runs at optimal levels at all times, not just to avert a business continuity crisis, but also to deliver the best possible customer experience.
How best to do this? Procurri has you covered.
Comprehensively support your infrastructure – no matter how old it is!
If your Data Centre is working well, there may be no need to unduly update its equipment and undergo a lengthy and disruptive purchase and installation process. Instead, invest in out-of-warranty support so that there’s no risk of having to rush out and purchase newer equipment should there be an issue; as this will only serve to extend service interruptions and worsen customer experience.
Procurri's out-of-warranty support works on a vendor-neutral basis, with no favours or preferences given to users of some brands over others. Service users have a single point of contact to access a global support network of multilingual product experts around the world, available 24 hours a day, 7 days a week, 52 weeks a year. Should phone support not suffice to resolve an issue, Procurri's local product specialists will attend in person to get everything back to business as usual for you as soon as possible.
Ensure product deployment is well planned
At some point in time, equipment will need replacing – either with like-for-like hardware or something newer. Procurri offers a range of deployment packages, each crafted bespoke to the client's needs, with customisable support levels.
Precise labour and parts planning is managed with your business on a strategic level to ensure on-site deployment is efficient and as seamless as it can be: because striving for the best possible customer service lies at the heart of everything we do, and that includes for your customers as well as you as one of ours!
Invest in Thorough and Ethical ITAD (IT Asset Disposition)
When the time does come to dispose of old data centre hardware, the job at hand is often difficult – as equipment is bulky, there are few specialists in the sector, and varying degrees of data cleansing must take place (dependent on the local legislation and best practice measures).
Procurri take a holistic approach to IT Asset Disposition, following a systematic method that ensures industry-approved management and treatment of equipment throughout.
Managing ITAD from end to end, Procurri take care of the entire process, from verification to the eventual disposal and/or recycling or refurbishment of the equipment. This includes thorough and secure data erasure and destruction, responsible and ethical disposal of electronic waste, comprehensive disk sanitisation, and adherence to relevant laws and legislation throughout the process. Procurri is compliant with the relevant ISO standards and will always comply with each geographic region's requirements as necessary – and, where possible, with the industry best-practice standards that go over and above those usual levels.
If equipment can be thoroughly data-cleansed and is still usable, Procurri will strive to refurbish and recycle it so that another business can use it and its lifespan can be extended. This is great for the business disposing of it, the one receiving it, and for the environment – it's a win-win-win!
Working efficiently and ethically can not only enhance the performance of your data centre, it can help extend its lifespan too: so you can focus on the most important thing about your business… your customers!
Procurri are DCA Corporate Partners
Procurri has 14 offices and four global warehouse distribution centres spanning EMEA, the Americas and APAC. If your end users have data centres in multiple countries, you can leverage Procurri as a global, single point of contact to help you seamlessly deliver valuable, enterprise data centre products, services and software to your customer.
Tel: +44(0) 1285 642222
Email: enquiry.uk@procurri.com
Steve Bowes-Phipps, DCA Advisory Board Member and Senior Data Centre Consultant, PTS Consulting
As Senior Data Centre Consultant for PTS, I sit on the Advisory Board of the DCA (UK Data Centre Trade Association) and chair the DCA Special Interest Group for Workforce Development and Capability.
I thought it would be helpful to create a brief information video (https://youtu.be/DAnsHAZDj_I) on seven things you could be doing as a data centre provider or end user that might help you in some way during a period unlike any we have experienced before.
1) Maintaining Availability
During this pandemic most data centres will be seeing a huge upturn in traffic and demand due to the amount of home working, online courses, streaming services and so on. The number one cause that comes up time and time again around data centre availability, or the lack of it, is human error, and if you want to reduce human error then minimise change. Change in a data centre can be devastating if it is not managed appropriately and correctly, and even when it is, there is still the opportunity for somebody to make a mistake or put in place something that impacts a production environment and takes time to rectify. So minimise change, and perhaps put in place a change freeze for this particular period – that is what I would suggest – and you may go some way to minimising any change-related outages.
2) Understand your Business
There is a well-tried and tested exercise called an "Operational Risk Assessment". The framework first looks at understanding what it is you need to provide as a business; then at what you have in place operationally to assist you in delivering it; then at where you might have gaps in the controls and measures you would use to double-check that processes are being followed; and finally at the strategies and tactics you have in place to eliminate or mitigate any risks that come out of the exercise. It can be tremendously informative and enlightening, and I really recommend that you do it, as it can make a big difference to the way your organisation copes with situations such as these.
3) Maintaining a Safe and Healthy Working Environment
During the COVID-19 coronavirus outbreak, commercial data centres in particular tend to have concentrated touch points, such as kitchenettes, which unfortunately are where the virus could easily be passed from one person to another. You may want to consider closing these communal areas at this time. Yes, there is going to be an impact on people, so communication is key: explain why you are doing it and advise how long the restriction might be in place. Encouraging hand-washing and distancing from others is clearly a sensible move too. That way, those visiting your site are prepared and can bring their own food and drink if catering or vending machines are no longer available. Whatever you do to keep customers and staff informed is a very prudent measure.
4) Reviewing your Disaster Recovery Plans
Now, you may or may not have those kinds of plans in place for your own organisation (I hope you do), but the important thing, particularly as a provider of services, is that you talk to all the vendors and third parties who provide support to you, because you need to understand what challenges they may be going through. Do they themselves have a DR plan in place, and do they understand the impact a shortage of resources on the services they deliver could have on your business? Whether it is the cleaning company, security, or plant maintenance and hands-and-eyes in the data centre, it is vital that you understand what they are doing to maintain the services they provide you. If you do not feel comfortable that this is the case, it might be prudent to decide upon an alternative or backup supplier in case the worst happens; for example, many data centres have contracts with two fuel suppliers, just in case.
5) Communications
Communication with clients and customers is vital; it is really crucial for them to understand what it is you are doing. Initially you may wish to do this via email, which is obviously a good medium to start with, and some clients may prefer a phone call. At the very least, though, you should consider putting up a status page on your customer portal or website and keeping it regularly updated in light of the COVID-19 outbreak and the associated restrictions on business, travel and social contact, which look set to be in place for quite some time to come.
6) Staff and People
If your job normally involves meeting people, going to businesses and visiting customers or sites, and you are no longer able to do that, now is the time to look at all those jobs and tasks you have put off because you were too busy, and to get everything up to date. Whether it is working on that project business plan, or proactive activities to increase brand awareness that will hopefully bring in more business, such as preparing industry insight or writing articles, blogs and white papers, or even improving your skills by taking some online training courses for you or your staff, make sure everyone is ready for the upswing that is inevitably going to happen as we globally come out the other side. I know a lot of plans are currently on hold, and at some point soon there will be a massive rush to get things done. You are going to want to be prepared for that rush to come, and the better prepared you are, the more likely you are to benefit from it. That leads me on to number seven.
7) Only Make People Redundant as a very Last Resort
My final point is actually a plea not to fire or make people redundant until you really have to, and until you have seriously investigated all the other options there might be first. See what government help is available, and talk to suppliers about deferring payment, agreeing a repayment plan, or getting paid quicker in order to support your cash flow. Whatever you do, try to keep people employed for as long as you can. There are several good reasons for this. First, it shows leadership in the industry: it shows that you care about your staff and value them, that you continue to provide a place where people want to come to work, and that you are investing in them for the long term, through both good and bad times, rather than just reacting to short, sharp shocks like the one we are currently experiencing.
Secondly, as I said under number six, there is quite likely a huge amount of work coming down the line. In my role as chair of the Workforce Development and Capability Special Interest Group for the Data Centre Alliance, I am constantly meeting with colleagues and peers and talking about how difficult it was, even before the outbreak, to get people on board with the right kind of skills and experience. Do you want to exacerbate that problem for yourselves by getting rid of all those people with great knowledge of your organisation? Of course you don't, because when your business improves and starts picking up again you want to be there taking advantage, hitting the ground running with all the resources you need in place when that starting gun is fired. So hold on to your people for as long as you can; take whatever grants or loan deferrals are available and whatever else you need, because this will subside, we will get to the other side, and you will need those people's knowledge when you want to grow your business again.
Conclusion
So that's just some of my thoughts, and I hope there was something in there for you. I'm happy for you to reach out to me; you should be able to find my profile on LinkedIn under PTS Consulting. Please do feel free to do that. I'm sure the Data Centre Alliance will have its own section covering this critical period relating to COVID-19, with lots of interesting and informative material worth keeping an eye on, so look out for that; a lot of people will be contributing, all with more great insight into how we can make the best of the situation and stay positive. I hope that you, all your family, colleagues and business remain healthy.
By Stephen Whatling, Chairman at Business Critical Solutions, BCS
The changing landscape
The datacentre landscape is fundamentally changing and, alongside hyperscale development, we are also seeing an increasing move towards edge data centres to support a growing need for greater connectivity and data availability. Whilst the decentralised data centre model has been around in various guises for some time, it fell out of favour for a lot of businesses as they sought to exploit the efficiencies of operating fewer, larger datacentres.
However, the phenomenal growth of the Internet of Things (IoT) is driving a resurgence in its popularity. Cisco is predicting that in the five years up to 2022, 1.4 billion internet users will have been added, there will be 10.5 billion more devices and connections, and broadband speeds will have increased by over 90%. Only edge networks can provide the high connectivity and low latency required by the IoT to meet users' expectations and demands for instant access to content and services.
The rise of AI
In addition, the rise of AI and immersive technologies such as virtual and augmented reality (VR/AR) is also a factor that will help drive this move. Whilst not perhaps mainstream yet, many sectors are assessing the benefits. For example, in the manufacturing environment, the now ubiquitous robots on many production lines can be improved and their role expanded by AI. A recent report by The Manufacturer (26 February 2019) found that 92% of senior manufacturing leaders believe that the 'smart factory' will help them increase productivity and empower their staff to work smarter, but a similar Forrester report also found that only one in eight large manufacturing businesses is using any form of AI. However, these kinds of innovations require a lot of computing power and an almost immediate response: a single machine that 'pauses for thought' could create a knock-on effect that causes immeasurable damage to the factory, production line and productivity. Once again, edge computing is best placed to support this.
In the case of AI and AR, speed is an important factor. At the edge, decision making is held closer to the point of need, and the resulting reduction in latency between the device and the processing power enables a much faster response time. Equally importantly, the data itself can be better managed in an edge environment. Data is often governed by local legislation, and now that it can be held in smaller data centres closer to the point of use, it becomes easier to meet the legal requirements of the local region.
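As a back-of-the-envelope illustration of the latency argument, the distances and fibre propagation speed below are assumptions chosen for the example; real round trips add switching, queuing and processing time on top:

```python
# Rough physics of the edge-latency argument: light in optical fibre
# covers roughly 200 km per millisecond, so round-trip time grows
# with distance to the processing site. Distances are invented.
C_FIBRE_KM_PER_MS = 200

def round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip propagation delay over fibre."""
    return 2 * distance_km / C_FIBRE_KM_PER_MS

for site, km in [("edge site", 30), ("regional cloud", 600), ("distant cloud", 3000)]:
    print(f"{site:>14}: >= {round_trip_ms(km):.1f} ms round trip")
```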
Data Security
One of the major factors that needs to be considered is data centre security, with cyberattacks increasing in both frequency and scale. Problems originating from the physical infrastructure have also been found to be behind outages in recent years. Some experts have suggested that edge computing potentially represents a soft underbelly for cyber security: for some, the use of the word 'edge' has allowed users to assume that the security of these systems is less important than that of local or cloud systems. Moving forward, however, clients will expect significant investment in security and disaster recovery processes, as well as in the physical maintenance and security of these localised data centres.
Investment in Telecoms
Another key consideration is that the increasing adoption of edge and cloud-based infrastructure for both social and business use is also placing greater demands on the distribution network in terms of latency, bandwidth and capacity. The increase in data over the next five years will place a lot of pressure on the telecoms network. It is the telecoms industry that will need to continue to invest and upgrade capacity to ensure that the infrastructure supports the growing demand for data flows to and from the edge and the cloud. Our Summer Report, which is available to download from our website, also highlights this issue with three-quarters of respondents agreeing that the telecoms industry needed to provide this investment. Less than 2% of all those surveyed believed that the current infrastructure would be able to support the current predictions of growth in data. This is likely cause for concern.
The need for Power
Similarly, these new data centres will need power. The thousands of servers across all connected countries will need to be located and designed with energy in mind. It is perhaps worth noting too that countries that can’t support the wider network demands will quickly fall behind in the race to realise the value of AI and AR.
The Opportunity
There is no doubt that the massive increase in the data available from billions of devices, together with the rise of AI, is both an opportunity and a challenge for businesses. Companies that can handle the scale, analyse the data and monetise its true value will have a real advantage. Edge computing will be able to handle more than a traditional network, with many more transactions per second over many more locations and architectures, but how and when will this infrastructure be delivered?
Conclusion
The fact that half of our respondents believe that edge computing will be the biggest driver of new datacentres tallies with our own convictions. We believe that the edge of the network will continue to be at the epicentre of innovation in the datacentre space and we are seeing a strong increase in the number of clients coming to us for help with the development of their edge strategy and rollouts.
In our view, the recent trend of migrating computing power and workload from in-house, on-site data centres to remote cloud-based servers and services will reverse a little. The next evolution, led by the need to make more and more decisions with little or no discernible delay, will see a move towards computing power sitting closer to the user and the data that needs to be processed. More and more connected devices relying on the edge means more and more data centres, probably smaller than the typical cloud data centre but no less important. With future trade, manufacturing, autonomous vehicles, city traffic systems and many other valuable applications relying on edge computing, the security and maintenance of these systems will be paramount. There is no doubt that edge computing forms part of the future data centre landscape.
About BCS:
BCS (Business Critical Solutions) is a specialist professional services provider to the international digital infrastructure industry. As the only company in the world that is dedicated to providing a full range of services solely within the business critical and technical real estate environments, we have successfully delivered over 1500 MW of mission critical data centre space in 24 countries. Privately owned, the company acts as a trusted advisor and partner to a wide range of international clients whose data centre estate is critical to their success. Key clients include: leading organisations in the colocation and wholesale data centre sector; global technology companies; landlords and data centre operators; as well as two of the biggest data centre developers in the world.
If the 2008 global financial meltdown was the (largely ignored) warning, then the current COVID-19 pandemic is the actual event which is causing governments across the globe to re-evaluate the past, the present and, most importantly, the future. There is a once in a lifetime opportunity to re-fashion almost every aspect of a country’s makeup – health, education, travel, work, leisure, government itself.
The airline industry is a great example. Globally, the expansion in air travel over the past 20 or so years has been extraordinary, from just under one and a half billion passengers in 1998 to almost four billion in 2018. Airports all over the world could only see yet more growth (8.2 billion passengers by the mid-2030s). And now, overnight, those numbers have fallen off a cliff.
Furthermore, even the most market-driven, capitalist governments appear to be realising that the environmental benefits of vastly reduced air travel have been noticed by large sections of the population. Coupled with a similar reduction in other forms of polluting transport, the planet has not been this 'clean' for quite some time, and, in general, we like it. So there is a moment in time in which the airline industry can be re-shaped, balancing its need to make a profit with the need for a more sustainable approach, and maybe, whisper it quietly, even a reduction in flights. Okay, so flight numbers right now are negligible, but it is how the industry is allowed to scale up again that is crucial.
I’ve long been a supporter of more and more people working from home/remotely wherever possible. The problem has always been how you break the commuter habit. And now the coronavirus has not so much educated people as to the benefits of working away from a central office, as forced them to adopt this new work style. And it seems that, in the main, we like it.
So, post pandemic, we have a simple, binary choice. As in 2008, we can put some sticking plasters over the mess and carry on merrily on our way in the pursuit of ever more wealth, at the cost of absolutely anything and everything that gets in the way; or, we can use this opportunity to re-evaluate and, ultimately, re-set the combination of values that matter most to a society.
Profit need not be a dirty word, but it can be better balanced alongside risk, the environment, social considerations and the like.
And technology has a massive role to play in such a brave new world. The algorithms that currently buy and sell shares in nanoseconds can also keep an eye on a company’s gearing; the relatively young telemedicine industry can bring quick, accurate and inexpensive medical support to all; videoconferencing can become an everyday activity, and not just be reserved for emergencies.
In 50 years' time, we just might look back on 2020 as the year when the deaths of a few hundred thousand people were the catalyst for a vastly improved, and more balanced, quality of life for billions.
But I’ll not be holding my breath.
Many of the financial business implications of COVID-19 will be felt in the IT department, according to Gartner, Inc. CIOs should take eight actions to protect or quarantine their IT organizations’ cash flow during the coronavirus pandemic.
“Survival, not growth, will be the priority for executives in 2020. Survival will depend on maintaining cash flows and income while continuing to be innovative with technology,” said Chris Ganly, senior research director at Gartner. “Enterprises that fail to act may not survive this disruption or will have their subsequent recovery delayed.”
The eight action items for CIOs to take include:
Place nonessential spend on hold
CIOs should immediately establish what aspects of their current spend can be deferred, eliminated or altered. Attention should be focused on spend that is not yet incurred or committed and is nonessential/discretionary and variable in nature.
Anticipate spend increases
Many organizations and industries that are largely office-based use desktops and fixed office networks and infrastructures. To move the office to remote work, spending increases might come in the form of obtaining and funding laptops, monitors and mobile devices, increased software, VPN and hardware costs and consumption-based communications costs – both voice and data.
“CIOs must anticipate and plan for the increased costs, which in many organizations will be felt in the IT budget. CIOs need to communicate this with business leaders and their CFO to ensure that the costs can be met, as well as the spend can be reduced where possible,” said Mr. Ganly. “For example, if offices or work locations are partially or completely vacated, can enterprise-/office-based utilities, communications/access, infrastructure and services be suspended or deferred? CIOs should carefully consider their cost base and cost categories to anticipate both what increases, and what can decrease, with some action.”
Reduce current spend rates
CIOs must work with the business to reprioritize requirements and set spending levels that they can afford. Spend and actions should be classified into three groupings.
Evaluate all existing investments
CIOs should immediately review all projects that are already in progress. These projects should be separated into two categories – noncritical and critical projects. Noncritical projects should be immediately halted, while critical projects, necessary for immediate cash flow and ongoing survival of the organization, should be reviewed to determine what aspects can be reduced.
Defer any new spend
CIOs should defer or cancel all uncommenced spending on projects, staffing, assets or upgrades and release any retained third-party resources, service or infrastructure expenses related to these.
Reevaluate all existing spending
Beyond tackling the largely discretionary project portfolio, CIOs should also address the current service portfolio to identify opportunities to provide a lower service level.
“CIOs should inspect their organizations’ current consumption levels on all variable operating expenses – for example, cloud services and voice and data communications,” said Mr. Ganly. “On a service-by-service basis, either completely eliminate or take control actions to reduce enterprise-wide consumption levels by restricting or managing supply and renegotiating contract terms as necessary.”
Negotiate consumption down
CIOs should work with business leaders to decide on key changes to operations. Negotiate with the business to terminate services or applications and encourage business users to use less or work in a different way — a way that reduces the variable operating costs, and potentially even the fixed costs, of the business.
Explore alternate financing approaches
CIOs should work with their CFOs to investigate what government or industry financial assistance is available at the federal, state and local level.
Five ways in which AI can help government and healthcare CIOs
Executives responsible for artificial intelligence (AI) strategy, particularly CIOs and CDOs in governmental and healthcare organizations, should leverage AI in five core areas to improve decision making during the coronavirus pandemic, according to Gartner, Inc.
“In the fight against COVID-19, AI offers an important arsenal of weapons,” said Erick Brethenoux, research vice president at Gartner. “It allows predictions to be made about the spread of the virus, helps diagnose cases more quickly and accurately, measures the effectiveness of countermeasures to slow the spread, and optimizes emergency resources, to name a few. The power of AI should not be ignored or only partially leveraged, so long as it is applied in ethically responsible ways.”
The five areas where using AI to combat COVID-19 will have the most impact are:
Early Detection and Epidemic Analysis
AI techniques are used to understand, analyze and predict how and where the virus is spreading or slowing down.
Automated contact tracing, for example, is used to build detailed social interaction graphs by analyzing a myriad of citizen data such as mobile phone locations and public facial recognition and backtracking the movement of people to identify the likely virus source. Individuals who encountered the source can then be notified, tested or quarantined.
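A minimal sketch of the graph idea, with invented co-location records standing in for the mobile-location and facial-recognition data described above; real systems add time windows, exposure duration and privacy safeguards:

```python
from collections import defaultdict, deque

# Purely illustrative: build a contact graph from co-location records
# (person, place, time bucket) and walk it to find everyone within
# a few hops of a known case. All records below are invented.
records = [
    ("alice", "cafe", "t1"), ("bob", "cafe", "t1"),
    ("bob", "gym", "t2"), ("carol", "gym", "t2"),
    ("dave", "office", "t3"),
]

graph = defaultdict(set)
by_place_time = defaultdict(list)
for person, place, t in records:
    # People sharing a place in the same time bucket are contacts.
    for other in by_place_time[(place, t)]:
        graph[person].add(other)
        graph[other].add(person)
    by_place_time[(place, t)].append(person)

def exposed(source, max_hops=2):
    """Breadth-first walk: people within max_hops contacts of source."""
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        person, hops = queue.popleft()
        if hops == max_hops:
            continue
        for contact in graph[person]:
            if contact not in seen:
                seen.add(contact)
                queue.append((contact, hops + 1))
    return seen - {source}

print(exposed("alice"))  # {'bob', 'carol'}: direct and second-hop contacts
```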
“Other AI applications that fall within this area include epidemic forecasting and monitoring the development of herd immunity. Such capabilities are obviously highly relevant in the short term as society tries to ‘flatten the curve’ and minimize the burden on our healthcare systems, but they are also important in the long term if new, hopefully smaller, outbreaks reoccur,” said Mr. Brethenoux.
Containment
Considering the huge societal and economic impact of ‘one-size-fits-all’ measures such as lockdowns, collaboration with non-IT experts is paramount when applying AI to containment efforts.
“Behavior analytics derives new insights by accounting for the dynamics of human behavior, culture and individual thinking to answer questions around social distancing compliance or the emergence of unwanted group behaviors,” said Pieter den Hamer, senior research director at Gartner. “Law enforcement can predict where and when people may not adhere to stay-at-home orders or social distancing through predictive enforcement and dispatch enforcement units accordingly.”
Triage and Diagnosis
The use of AI-enabled self-triage has already gained popularity as telehealth practices, including virtual health assistants, were made available to help people identify if they are possibly infected and what the appropriate next steps are. Augmented medical diagnosis and triage are also key AI capabilities that help in this area.
“AI is known to improve the accuracy of certain diagnoses if augmented with human judgment, especially in more complex cases,” said Mr. den Hamer. “Prognostic modelling, or predicting how the disease will likely develop in patients, can also be used to improve treatment recommendations. The fact that AI has a role to play in assessing patient risk and prognosis is not something to overlook, especially when there is a possible shortage of medical professionals.”
Healthcare Operations
AI plays an important role in streamlining healthcare operations and optimizing scarce hospital resources during a pandemic. Healthcare CIOs and CDOs can use predictive staffing to improve personnel allocation by analyzing anticipated patient numbers and their individual prognosis, and cross referencing them with the availability of qualified medical staff, materials and equipment.
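A minimal sketch of that predictive-staffing idea with made-up numbers; the nurse-to-patient ratio, forecasts and rosters below are assumptions for illustration only:

```python
import math

# Illustrative only: translate forecast patient load per shift into
# required nurses using an assumed ratio, then flag shortfalls.
NURSE_TO_PATIENT = 1 / 4  # assumption: one nurse per four patients
forecast = {"mon_day": 120, "mon_night": 80, "tue_day": 140}   # predicted patients
available = {"mon_day": 28, "mon_night": 22, "tue_day": 30}    # rostered nurses

for shift, patients in forecast.items():
    required = math.ceil(patients * NURSE_TO_PATIENT)
    gap = required - available[shift]
    status = f"short by {gap}" if gap > 0 else "covered"
    print(f"{shift}: need {required}, have {available[shift]} -> {status}")
```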
“Remote patient monitoring and alerting with the use of AI also allows patients to stay at home, lowers the burden on hospitals, and enables a better understanding of how symptoms develop over time,” said Mr. den Hamer.
Vaccine Research & Development (R&D)
AI graphs and natural-language processing (NLP) can enable medical researchers to scour through many thousands of relevant reports to draw connections between data at an unprecedented pace. Augmented vaccine R&D also identifies coronavirus countermeasures including those that have already been tested on humans.
“Healthcare CIOs and CDOs should explore every avenue of AI to fight COVID-19 using an ongoing and systematic process of AI application identification and prioritization. Technologists should not overestimate their ability to judge what makes sense from a public health and medical perspective; they should work with healthcare professionals to create and actively advertise an open marketplace that shares AI applications, models and data transparently,” said Mr. den Hamer.
Underlying risks set to be exacerbated by COVID-19 pandemic
“Strategic assumptions” remained the top concern for senior executives in the first quarter of 2020, as numerous other risks are set to be exacerbated by the current COVID-19 crisis, according to Gartner, Inc.’s latest Emerging Risks Monitor Report.
Gartner surveyed 107 senior executives across industries and geographies on the top concerns facing their businesses with results showing that “strategic assumptions” remained the top emerging risk for the second consecutive quarter (see Table 1). The survey was in the field from mid-February to early March of 2020 and reflects only the early stages of the coronavirus crisis.
“Executives had been concerned with the validity of their strategic assumptions well before the current crisis situation,” said Matt Shinkman, vice president with the Gartner Risk and Audit Practice. “The economic and operational fallout as a result of the global COVID-19 pandemic has forced many executives, particularly in the hardest hit industries, to start from scratch, even with a great deal of uncertainty still ahead.”
Table 1. Top Five Risks by Overall Risk Score: 2Q19-1Q20
Rank | 2Q19 | 3Q19 | 4Q19 | 1Q20 |
1 | Pace of Change | Digitalization Misconceptions | Strategic Assumptions | Strategic Assumptions |
2 | Lagging Digitalization | Lagging Digitalization | Cyber-Physical Convergence | Cyber-Physical Convergence |
3 | Talent Shortage | Strategic Assumptions | Extreme Weather Events | 2020 US Presidential Election |
4 | Digitalization Misconceptions | Data Localization | Data Localization | Data Localization |
5 | Data Localization | U.S.-China Trade Talks | U.S.-China Trade Talks | Macroeconomic Stagnation |
Source: Gartner (April 2020)
Crisis Forces Faster Reckoning with Emerging Risks
In addition to the damage caused to already shaky strategic assumptions, senior executives and their enterprise risk management (ERM) teams now face a reckoning with many additional emerging risks heightened by the current crisis. Three additional risks in the top five (cyber-physical convergence, the upcoming U.S. presidential election and the potential for macroeconomic stagnation) have all taken on new dimensions and urgency as the crisis has worsened the global economic outlook.
“COVID-19 is a uniquely challenging risk for most organizations to manage in and of itself, but it also acts as kindling that will spark adjacent risks into much greater intensity,” said Mr. Shinkman. “It’s clear that enterprise risk professionals will be stretched as previous ‘wait and see’ risks require urgent action today.”
Mr. Shinkman pointed to cyber-physical convergence as just one example of an emerging risk that has taken on new dimensions during the crisis. With an increasing number of employees forced to work from home, and a previous Gartner survey indicating that 74% of CFOs plan to make at least some portion of their in-house staff permanently remote, operational technology (OT) protected by insufficient security practices will only become more vulnerable and easier to exploit in this environment.
ERM Considerations for COVID-19
In additional conversations with more than 100 senior risk executives on March 27 and April 3, Gartner identified three common areas of concern and actions underway among this group.
Nearly Three in Four CFOs Plan to Shift at Least 5% of Previously On-Site Employees to Permanently Remote Positions Post-COVID-19
A Gartner, Inc. survey of 317 CFOs and finance leaders on March 30, 2020 revealed that 74% intend to move at least 5% of their previously on-site workforce to permanently remote positions post-COVID-19.
“This data is an example of the lasting impact the current coronavirus crisis will have on the way companies do business,” said Alexander Bant, practice vice president, research for the Gartner Finance Practice. “CFOs, already under pressure to tightly manage costs, clearly sense an opportunity to realize the cost benefits of a remote workforce. In fact, nearly a quarter of respondents said they will move at least 20% of their on-site employees to permanent remote positions.”
Figure 1: 74% of Companies Plan to Permanently Shift to More Remote Work Post COVID
Source: Gartner (April, 2020)
With 81% of CFOs previously telling Gartner that they planned to exceed their contractual obligations to hourly workers, remote work is one example of the creative cost savings senior finance leaders are seeking in order to avoid more severe cuts and minimize the downside impact on operations. CFOs previously reported to Gartner that they were taking additional steps to support employees in this area by adjusting to more flexible work schedules and providing company-issued work-from-home equipment. These actions by finance leaders help minimize the disruptions workers might be facing as a result of the crisis.
“Most CFOs recognize that technology and society have evolved to make remote work more viable for a wider variety of positions than ever before,” said Mr. Bant. “Within the finance function itself, 90% of CFOs previously reported to us that they expect minimal disruptions to their accounting close process, with almost all activities able to be executed off-site.”
As organizations continue to grapple with the ongoing business disruptions from COVID-19, permanent remote work could complement cost-cutting measures that CFOs have already taken or plan to take. In Gartner’s most recent survey, 20% of respondents indicated they have deferred on-premise technology spend, with an additional 12% planning to do so. An additional 13% of respondents noted they had already made cost reductions in real estate expenses, with another 9% planning to take actions in this area in the coming months.
Worldwide IT spending is now expected to decline 5.1% in constant currency terms this year to $2.25 trillion, as the economic impact of the COVID-19 pandemic continues to drive down some categories of tech spending and short-term business investments. A new update to the IDC Worldwide Black Book Live Edition shows ICT spending, which includes telecom and business services, will decline by 3.4% this year to just over $4 trillion, with telecom spending down 0.8%. However, IT infrastructure spending is still projected to grow overall by almost 4% to $237 billion, with resilient spending by service providers, in addition to ongoing enterprise demand for cloud services, offsetting declines in business capital spending.
"Inevitably a major economic recession, in Q2 especially, will translate into some big short-term reductions in IT spending by those companies and industries that are directly impacted," said Stephen Minton, program vice president in IDC's Customer Insights & Analysis group. "Some firms will cut capital spending and others will either delay new projects or seek to cut costs in other ways. But there are also signs that some parts of the IT market may be more resilient to this economic crash in relative terms than previous recessions with technology now more integral to business operations and continuity than at any time in history."
% Growth 2020 | January Forecast | February Forecast | March Forecast | April Forecast |
Real GDP | +2.4% | +2.0% | -1.7% | -3.7% |
IT Spending | +5.1% | +4.3% | -2.7% | -5.1% |
Source: IDC Worldwide Black Book Live Edition, April 2020
Note: IT Spending growth at constant currency
Overall spending on devices, including PCs and phones, will be down significantly this year and is the main drag on total IT spending, with the economic fallout likely to disrupt upgrade cycles for smartphones, which had been expected to be boosted by the launch of premium 5G devices. The PC market was already expected to decline this year after a commercial refresh cycle in 2019, leaving discretionary upgrades to new notebooks and tablets extremely vulnerable to any period of economic decline.
Infrastructure spending, on the other hand, is still expected to post moderate growth overall as businesses continue to fund existing cloud deployments while some may even look to accelerate their cloud projects during the remainder of the year as a means to control costs and defer capital spending on upgrades to on-premise datacenters and applications.
"Where there is growth, most of it is in the cloud," said Minton. "Overall software spending is now expected to decline as businesses delay new projects and application roll-outs, while there is a fundamental link between employment and spending on things like software licenses and campus networks. On the other hand, the amount of data that companies must store and manage is not going anywhere. Increasingly, even more of that data will be stored, managed, and increasingly also analysed in the cloud."
% YoY Growth | 2019 | 2020 |
Devices | +0.9% | -12.4% |
Infrastructure | +8.8% | +3.8% |
Software | +10.0% | -1.9% |
IT Services | +4.7% | -2.6% |
IT Spending | +5.0% | -5.1% |
Source: IDC Worldwide Black Book Live Edition, April 2020
Notes: IT Spending growth at constant currency.
Infrastructure includes server/storage/network hardware and cloud services
IT services spending will decline, mostly due to delays in big new projects, but a large portion of services revenue will be relatively protected from spending cuts where it relates to the management, support, and operations of technology, which is now fundamental to business performance and viability. At the same time, many companies are also reluctant to reverse course on digital transformation, which is central to business strategy.
"IT spending is very uneven right now with businesses dealing with the type of crisis that was not envisaged in many contingency plans," said Minton. "When all is said and done, we expect to find that early adopters of cloud and other digital technologies were best positioned to ride out this kind of storm with the least amount of disruption from an operational perspective, even if the direct impact on revenue is still more affected by external factors that no CEO or CIO saw coming."
Telecom spending will decline by almost 1%, which is relatively stable compared to other types of technology investments. Carriers will continue to invest in 5G network deployments in many countries, while the lockdown has increased demand for fixed broadband services in the short term. The economic fallout will put some macro pressure on consumer spending, including upgrades to 5G mobile contracts, in the second half of 2020, but the overall impact on telecom spending will be moderate compared to other ICT markets.
% YoY Growth | 2019 | 2020 | 2021 |
IT Spending | +5.0% | -5.1% | +5.0% |
Telecom Spending | +0.5% | -0.8% | +0.7% |
ICT Spending | +3.5% | -3.4% | +3.1% |
Source: IDC Worldwide Black Book Live Edition, April 2020
Note: IT Spending growth at constant currency.
Worldwide revenue for the unified communications & collaboration (UC&C) market reached $38.8 billion in 2019, representing year-over-year growth of 17.7% according to the International Data Corporation (IDC) Worldwide Unified Communications & Collaboration (UC&C) QView. The QView provides a comprehensive view of the current market, reporting both revenue and shipments of hardware, software, and cloud-based services for dozens of vendors in the UC&C space.
Market highlights for Q4 2019 include the following:
Regional highlights for Q4 2019 are as follows:
Key metrics for vendors such as 8x8, ALE, Avaya, BlueJeans, Cisco, Google, Huawei, Logitech, Microsoft, Mitel, NEC, Poly, RingCentral, Slack, Unify, Vonage, Yealink, and Zoom, among many others, are included in the QView.
"This enhanced QView provides a comprehensive view of the UC&C market and vendors in terms of both revenue and shipments, as well as UC&C technology segmentation," said Rich Costello, senior research analyst, Unified Communications & Collaboration. "Areas of particular interest and adoption today, especially in light of the current pandemic, are well-represented in this IDC view of the global market, including cloud-based voice/UC, videoconferencing, and collaboration, among others."
International Data Corporation's (IDC) EMEA Server Tracker shows that in the fourth quarter of 2019 the EMEA server market reported a year-on-year decrease in vendor revenues of 5.1% to $4.6 billion and a YoY decrease of 5.2% in units shipped to around 550,000. The top 5 vendors in EMEA and their revenues for the quarter are displayed in the table below.
Top 5 EMEA Vendor Revenues ($M)
Vendor | 4Q18 Server Revenue | 4Q19 Server Revenue | 4Q18 Market Share | 4Q19 Market Share | 4Q18/4Q19 Revenue Growth |
HPE | $1,065.2 | $843.1 | 28% | 23% | -21% |
Dell EMC | $885.4 | $799.4 | 23% | 22% | -10% |
IBM | $429.2 | $542.0 | 11% | 15% | 26% |
ODM Direct | $533.4 | $525.6 | 14% | 14% | -1% |
Lenovo | $265.6 | $244.7 | 7% | 7% | -8% |
Others | $684.1 | $697.7 | 18% | 19% | 2% |
Total | $3,863.0 | $3,652.3 | 100% | 100% | -5% |
Source: IDC Quarterly Server Tracker, 4Q19
When viewing the EMEA market by product detail, the standout contributor to the quarter's growth was custom multinode units, which grew 22.8% YoY in units and 9.9% YoY in revenues. ODM direct vendors continue to perform strongly in this product segment, but most interesting is Lenovo's entrance into the market and its clear objective of penetrating the hyperscale segment. Large systems continued to show strength, growing a further 35.1% YoY in revenue, though this was largely down to a strong refresh cycle for IBM.
"IDC believes COVID-19 will slow the spending on high-end server systems," said Eckhardt Fischer, senior research analyst in the European Infrastructure group, "and although standard rack-optimized shipments saw a slight decrease of 1.2% YoY in terms of units and a decline of 6.3% YoY in revenues, there is strong growth in certain portions of the market as it looks to find efficient workarounds to the current situation. Hyperconverged systems [HCIs] saw shipments grow 47.1% YoY and revenues increase 36.1% YoY in EMEA."
Although average selling prices (ASPs) continued to weaken thanks to increased CPU competition and decreasing memory prices, IDC believes that supply constraints caused by manufacturing disruptions could push ASPs back up again.
IDC expects economic fallout from the pandemic to affect all IT hardware markets. Spending on traditional IT infrastructure is forecast to contract 16.4% year on year. Investments in cloud infrastructure hardware, however, are projected to increase 10.4% in 2020, reaching $11.6 billion. This segment is expected to claim a larger share of the overall IT hardware market than previously forecast.
"We're already seeing a spike in pandemic-related demand, particularly among telecommunications companies and digital B2C service providers," said IDC's Kamil Gregor, senior research analyst for European enterprise infrastructure. "This stems from European companies asking their employees to work from home and digital services' customers spending more time online. A lot of spending will be on infrastructure to support cloud-delivered services like unified business communications, including video streaming."
Regional Highlights
Segmenting at a Western European level, Germany produced the strongest performance among the major markets, with 5.8% and 5.2% YoY unit and revenue growth respectively. A few countries, such as Ireland and the Netherlands, were buoyed by continued hyperscale datacenter investments. France had a relatively good quarter with a 19.1% YoY increase in server revenue. With around $1 billion in revenue, Germany maintained its position as the region's largest market.
"Central and Eastern Europe, the Middle East, and Africa [CEMA] server revenue declined for a second consecutive quarter in 4Q19, down by 3.6% year over year to $1,045.84 million," said Jiri Helebrand, research manager, IDC CEMA. "The overall decline in revenue can be attributed to weaker sales of x86 servers on the back of slowing economies in the region and the fact that 4Q18 makes for a challenging comparison as server sales were the second largest in history. The Central and Eastern Europe [CEE] subregion grew by 2.7% year over year with revenue of $615.64 million. Slovakia, Hungary, and Russia recorded the strongest growth, with Russia benefitting from increased purchases by local cloud providers as well as solid demand from financial and telecommunication sectors. The Middle East and Africa [MEA] subregion declined by 11.4% year over year to $430.19 million in 4Q19 as we observed lower demand for traditional IT infrastructure from enterprise buyers. Qatar, Kenya, and Bahrain bucked the trend and were the only countries to record double-digit growth in MEA, with Qatar and Kenya benefiting from several IT projects in the public sector."
Taxonomy Changes
Modular server category: Server form factors have been amended to include the new "modular" category that encompasses today's blade servers and density-optimized servers (which are being renamed multinode servers). As the differentiation between these two types of servers continues to become blurred, IDC is moving forward with the "modular server" category as it better reflects the directions in which vendors and the entire market are moving when it comes to server design.
Multinode (density-optimized) servers: Modular platforms that do not meet IDC's definition of a blade are classified as multinode. This was formerly called density optimized in IDC's server research and server-related tracker products.
Worldwide IT Services and Business Services revenue grew 5% year over year in 2019, according to the International Data Corporation (IDC) Worldwide Semiannual Services Tracker (growth in nominal, dollar-denominated revenue at current exchange rates was 2.4%, owing to the dollar's appreciation in 2019).
This represents the second consecutive year of market acceleration (from 4% growth in 2017 to 4.2% in 2018 and 5% in 2019) despite a cooling economy (2019 world GDP growth slowed to just above 3%). Large services vendors also reported stronger bookings and book-to-bill ratios mostly above 1, signalling buyers' overall optimism as well as their appetite for more digital transformation. Both metrics are illustrated in the sketch below.
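As a purely illustrative aside (all figures below are invented, though the FX example is chosen to mirror the 5% constant-currency versus 2.4% nominal growth reported above): constant-currency growth strips out exchange-rate movements, and a book-to-bill ratio above 1 means new orders are outpacing billed revenue.

```python
# Illustrative only: constant-currency vs nominal growth, and book-to-bill.
rev_2018_local = 100.0           # prior-year revenue in a local currency
rev_2019_local = 105.0           # +5% underlying growth in local currency
fx_2018, fx_2019 = 1.000, 0.975  # USD per local unit: the dollar appreciated

nominal = rev_2019_local * fx_2019 / (rev_2018_local * fx_2018) - 1
constant = rev_2019_local / rev_2018_local - 1
print(f"constant currency: {constant:+.1%}, nominal USD: {nominal:+.1%}")
# -> constant currency: +5.0%, nominal USD: +2.4%

bookings, billings = 5.4, 4.8    # invented quarterly figures, $B
print(f"book-to-bill = {bookings / billings:.2f}")  # 1.12: orders outpace billings
```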
However, with the market ravaged by the COVID-19 pandemic and the accompanying economic malaise, the outlook has turned. This is the first time since the two world wars that the global economy has been disrupted on both the demand and supply sides at such a scale.
Considering the impacts, IDC forecasts the worldwide services market will decline 1.1% in 2020 and grow just over 1% in 2021. The new forecast is based on the Economist Intelligence Unit's projection that the real 2020 GDP will likely contract by more than 2%, with a sharp decline in Q1 and Q2 offset by recovery in the second half of the year.
The impact on the supply side, at least for enterprise services, will be relatively small. Providers are quickly adopting the "new norms" of working remotely and social distancing. The COVID-19 crisis will also tip organizations and consumers over to the online world sooner. As a result, it may improve productivity and open new opportunities.
The demand-side shock and uncertainty will have a bigger and deeper impact. Most regions will contract somewhat in 2020 but with different severities: overall, Asia/Pacific will continue to grow, while the Americas will contract slightly in 2020. Europe, the Middle East and Africa (EMEA) will be the most negatively impacted.
The Americas services market will contract by 0.2% this year, down from the 5.2% growth experienced in 2019. IDC expects it to bounce back to growth in 2021 and eventually reach 3% or more, but the five-year compound annual growth rate (CAGR) will be considerably lower than previously forecast. This will be driven largely by the US market, which will remain essentially flat in 2020, tapering slightly from revenues of $485.6 billion in 2019 to $484.7 billion this year, a 0.18% decline. US government and private sectors are putting off new projects and delaying discretionary spending decisions due to market uncertainty. Project-oriented markets, such as consulting, custom application development and systems integration, are expected to suffer short-term revenue shortfalls: growth in this segment is expected to be just 0.4% in 2020, down from last year's 7.4%. Managed services are expected to shrink slightly (-0.7%), and support services will be flat (both were slower-growing markets to begin with).
Canada's market is also expected to shrink in 2020. This is partially offset by slight growth in Latin America, although that represents a sharp deceleration from the region's 7.2% growth last year, amidst the demand shock of local shutdowns, currency depreciation, and China's weak demand for commodities, among other factors.
The EMEA region is forecast to contract by 4.3% in 2020, down from 2019's 4.4% growth, and will not likely return to positive territory until 2022. However, IDC expects different countries and sub-regions to recover at very different rates.
With major Western European countries reporting high numbers of COVID-19 cases and fatalities and bracing themselves for a major recession from a prolonged shutdown, business leaders will focus primarily on preserving cash, which will impact almost all foundation markets in the near term. In Central and Eastern Europe (CEE), most markets are expected to contract similarly, ending two years of fast growth. Russia's market, which accounts for almost one third of the CEE market, however, is being severely impacted by the oil price shock and will decline by 20% this year. The Middle East & Africa market will also shrink moderately largely due to falling oil prices.
Asia/Pacific will decelerate and is expected to grow by only 1.9% this year, down from 5.5% last year. The biggest impact will be felt in China. IDC expects the China services market to grow 2.4% in 2020, down from 7.6% in 2019, assuming a robust economic rebound in the second half of 2020 and government stimulus that offsets the crisis earlier in the year. For the rest of the region, the market is forecast to slow to 4.6% growth this year and 4.5% next year, down slightly from 5.9% last year. Compared to other regions, reports of confirmed cases and fatalities suggest that the pandemic has been better contained in the region; therefore, the outlook for the region's growth potential is more optimistic.
"The COVID-19 pandemic is a demand shock on the services market worldwide," said Lisa Nagamine, research manager with IDC's Worldwide Semiannual Services Tracker, "but it will present different challenges, as well as opportunities, to different regions, industries, services offerings, as well as services providers."
"It will also have a profound long-term impact on our global supply chain," said Xiao-Fei Zhang, program director, Global Services Markets and Trends. "After the dust settles, services vendors may find their client portfolio changed, as well clients' priorities. They will need to re-align their digital capabilities to the 'new norm.'"
The global traditional PC market, comprised of desktops, notebooks, and workstations, declined 9.8% year over year in the first quarter of 2020 (1Q20), reaching a total of 53.2 million shipments according to preliminary results from the International Data Corporation (IDC) Worldwide Quarterly Personal Computing Device Tracker. The stark decline after a year of growth in 2019 was the result of reduced supply due to the outbreak of COVID-19 in China, the world's largest supplier of PCs.
While production capacity in January was pretty much on par with past years, the extended closure of factories in February and the slow resumption of manufacturing along with difficulties in logistics and labor towards the end of the quarter led to a reduction of supply. Meanwhile, demand rose during the quarter as many employees needed to upgrade their PCs to work from home and consumers sought gaming PCs to keep themselves entertained.
"Though supply of new PCs was somewhat limited during the quarter, a few vendors and retailers were able to keep up with the additional demand as the threat of increased tariffs last year led to some inventory stockpiling at the end of 2019," said Jitesh Ubrani research manager for IDC's Mobile Device Trackers. "However, this bump in demand may be short lived as many fear the worst is yet to come and this could lead to both consumers and businesses tightening spending in the coming months."
"IDC believes there will be longstanding positive consequences once the dust settles," said Linn Huang, research vice president, Devices and Displays at IDC. "Businesses that once primarily kept their users on campus will have to invest in remote infrastructure, at the very least, for continuity purposes. Consumers stuck at home have had to come to terms with how important it is to keep tech up to date. This should provide a steady, long-range tailwind for PC and monitor markets, among other categories."
Regional Highlights
Asia/Pacific (excluding Japan) (APeJ): Traditional PC shipments posted a double-digit decline in 1Q20. The closure of factories in China due to the COVID-19 outbreak resulted in a supply-side disruption throughout the region, while demand was impacted severely in China due to the suspension of business activities in the most affected provinces. As the pandemic spread throughout the world, most of the Asia/Pacific countries progressed into a partial or full closure by the second half of March, with non-essential activities suspended and business operations halted. Even though there was a short-term spike in demand for PCs due to work from home and e-learning, IDC expects a significant negative impact on demand, extending several months or even quarters.
Canada: The Traditional PC market posted growth for the 15th consecutive quarter, with several vendors managing to capitalize in these unique times. Strategic purchases for Windows 10 upgrades, government year-end, and to counteract possible component shortages were quickly consumed by the rush to address working and learning from home. Inventory levels in all areas of the channel have been decimated to meet this demand. As many retail locations and businesses close, the need to replenish inventory will fade, as will the ability to receive goods through all levels of the supply chain and channel.
Europe, Middle East, and Africa (EMEA): Traditional PC shipments saw a single-digit year-over-year decline after three consecutive quarters of growth, driven by both desktops and notebooks. The lower than expected performance is attributed to the global pandemic. Despite strong PC demand from SMBs and an additional surge in demand for notebooks stemming from work or study at home amidst severe lockdown across the region, a constrained supply chain was primarily responsible for the decline.
Japan: Commercial and consumer markets had been expected to post strong growth in 1Q20, led by demand for Windows 10 migration. However, the supply chain constraints created by COVID-19 drove the overall Japan PC market into decline.
Latin America: The Traditional PC market showed a slightly more pronounced contraction than expected. The biggest contractions were reflected in notebook devices, in both the consumer and commercial segments, due to important deliveries (principally for education and government deals) that had been postponed.
United States: While the Traditional PC market saw growth for much of 2019, the first quarter of 2020 produced a significant drop in shipments in the US. Current volume estimates show a year-over-year decline of 4%, which would mark this as the lowest quarterly shipment volume seen in more than a decade. While the desktop market is expected to maintain low single digit year-over-year growth, the notebook market is expected to contract by upwards of 8%.
Company Highlights
Lenovo once again managed to capture the leading position despite declining 4.3% during the quarter. Excluding the Asia/Pacific region and Japan, the company managed to grow across all the other regions thanks to increased demand stemming from new work from home policies.
HP Inc. finished the quarter in second place while declining 13.8% year over year during the quarter. Despite the company's scale and brand recognition, it was unable to secure enough supply during the quarter leading to a slight reduction in share.
Dell Technologies once again ranked third overall. This was a rather successful quarter for the company as it was one of the few companies that managed to grow during the quarter—up 1.1% year over year—thanks to strong relationships with the supply chain.
Acer Group rose to fourth place with close to 3.4 million units shipped in the quarter. By pulling in inventory ahead of the shutdown in February, the company was able to negate some of the ill effects of the supply disruption. A strong gaming portfolio as well as success in the Chromebook market helped the company rise up the ranks.
Apple saw its Mac volumes decline by 20.7% year over year, one of the largest drops in recent history as almost all of its manufacturing is based in China and the company was one of the hardest hit by the shutdown of factories.
Top 5 Companies, Worldwide Traditional PC Shipments, Market Share, and Year-Over-Year Growth, Q1 2020 (preliminary results; shipments in thousands of units)
Company | 1Q20 Shipments | 1Q20 Market Share | 1Q19 Shipments | 1Q19 Market Share | 1Q20/1Q19 Growth |
1. Lenovo | 12,830 | 24.1% | 13,413 | 22.7% | -4.3% |
2. HP Inc. | 11,701 | 22.0% | 13,573 | 23.0% | -13.8% |
3. Dell Technologies | 10,495 | 19.7% | 10,379 | 17.6% | 1.1% |
4. Acer Group | 3,364 | 6.3% | 3,733 | 6.3% | -9.9% |
5. Apple | 3,092 | 5.8% | 3,896 | 6.6% | -20.7% |
Others | 11,757 | 22.1% | 14,019 | 23.8% | -16.1% |
Total | 53,238 | 100.0% | 59,013 | 100.0% | -9.8% |
Source: IDC Quarterly Personal Computing Device Tracker, April 13, 2020